Unable to access the value of extra-vars in debug statement - jenkins

The below command is executed via Jenkins but I am not able to print the value of "JENKINS_BUILD_NUMBER" in my debug statement.
ansible-playbook -i /etc/ansible/inventory/development/hosts /etc/ansible/test.yml -e JENKINS_BUILD_NUMBER=$BUILD_NUMBER
The task content looks like the following:
# cat tasks/main.yml
---
- debug:
    msg: "{{ JENKINS_BUILD_NUMBER }}"
Jenkins console log shows the below result:
# ansible-playbook -i /etc/ansible/inventory/development/hosts /etc/ansible/test.yml -e JENKINS_BUILD_NUMBER=$BUILD_NUMBER
[SSH] executing...
PLAY [Testing] *****************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [test : debug] ************************************************************
ok: [localhost] => {
"msg": ""
}
PLAY RECAP *********************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
[SSH] completed
[SSH] exit-status: 0
Finished: SUCCESS
Additional Info:
Tried both --extra-vars and -e; neither seems to work.
Tried with "JENKINS_BUILD_NUMBER=$BUILD_NUMBER" in quotes and without; neither gives any result.

-e doesn't seem to work for me either. Try doing --extra-vars "JENKINS_BUILD_NUMBER=$BUILD_NUMBER" instead.
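Given that the debug task printed an empty string rather than failing with an undefined variable, the extra var is reaching Ansible but its value is empty. A likely culprit is that BUILD_NUMBER is not set in the remote [SSH] session where the command runs. A minimal sketch of the expansion (the inventory path is shortened, and the explicit BUILD_NUMBER assignment stands in for what Jenkins normally injects):

```shell
# Quote the whole -e argument so the shell expands $BUILD_NUMBER before
# Ansible sees it; if BUILD_NUMBER is unset in the [SSH] session, the
# expansion is empty -- matching the empty "msg" in the console log.
BUILD_NUMBER=42   # hypothetical; Jenkins normally sets this itself
echo ansible-playbook -i hosts test.yml -e "JENKINS_BUILD_NUMBER=$BUILD_NUMBER"
```

Verify with a plain `echo $BUILD_NUMBER` in the same build step that the variable actually has a value there.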

Related

Jenkins ERROR: script returned exit code 4

I got a simple Groovy script to install agents on my servers using Ansible.
After I run the pipeline I get this error:
ERROR: script returned exit code 4
Finished: FAILURE
The error happens because I have two instances not running (I don't want them running) and I get connection timeouts from them.
Is there a way to get Jenkins to ignore such errors?
A not-so-ideal solution would be to state ignore_unreachable: yes at the top of your playbook.
This is not ideal because you risk missing unreachable hosts you do care about.
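In play terms, that blunt approach is just (a sketch, with your tasks elided):

```yaml
# Suppresses unreachable failures for every host in the play
- hosts: all
  ignore_unreachable: yes
  tasks:
    ## your existing tasks
```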
A possibly better solution would be to gracefully end the play for those unreachable hosts via a meta task, based on a list of the hosts you don't need up and running.
For example:
- hosts: localhost, ok-if-down
  gather_facts: no

  pre_tasks:
    - ping:
      ignore_unreachable: yes
      register: ping

    - meta: end_host
      when:
        - inventory_hostname in _possibly_unreachable_hosts
        - ping is unreachable
      vars:
        _possibly_unreachable_hosts:
          - ok-if-down
          ## add more host name(s) to this list here

  tasks:
    ## here go your current tasks
When run, the exit code of this playbook would be 0:
$ ansible-playbook play.yml; echo "Return code is $?"
PLAY [localhost, ok-if-down] **************************************************
TASK [ping] *******************************************************************
fatal: [ok-if-down]: UNREACHABLE! => changed=false
  msg: 'Failed to connect to the host via ssh: ssh: Could not resolve hostname ok-if-down: Name does not resolve'
  skip_reason: Host ok-if-down is unreachable
  unreachable: true
ok: [localhost]
TASK [meta] *******************************************************************
skipping: [localhost]
TASK [meta] *******************************************************************
PLAY RECAP ********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
ok-if-down : ok=0 changed=0 unreachable=1 failed=0 skipped=1 rescued=0 ignored=0
Return code is 0
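If you would rather keep the playbook strict, the tolerance can live on the Jenkins side instead. A hedged sketch for a shell build step, assuming the conventional ansible-playbook behavior of exiting with code 4 when hosts are unreachable:

```shell
# Map Ansible's "hosts unreachable" exit code 4 to success, while
# keeping every other non-zero exit code fatal for the Jenkins build.
tolerate_unreachable() {
  "$@"
  rc=$?
  if [ "$rc" -eq 4 ]; then
    echo "only unreachable hosts failed; continuing"
    return 0
  fi
  return "$rc"
}

# Usage in the pipeline step would be, e.g.:
#   tolerate_unreachable ansible-playbook play.yml
```

The downside is the same as ignore_unreachable: code 4 does not tell you which hosts were unreachable, so the meta-task approach above remains more precise.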

Failing to start nginx container when volumes is used (using ansible and docker-compose)

I am trying to start an nginx container using ansible with docker-compose from one machine to a different machine.
Whenever I include nginx.conf to the volumes, there is an error which I do not understand. The container is only created but not starting.
MACHINE-1
Command to run the playbook: ansible-playbook -v nginx-playbook.yml -l ubuntu_node_1 -u root
my playbook:
- name: nginx-docker_compose
  hosts: all
  gather_facts: yes
  become: yes
  tasks:
    - community.general.docker_compose:
        project_name: nginx
        definition:
          version: '2'
          services:
            web:
              image: nginx:latest
              volumes:
                - ./vars/nginx.conf:/etc/nginx/nginx.conf:ro
              ports:
                - "8080:80"
[EDITED]
Here is the error:
Using /etc/ansible/ansible.cfg as config file
PLAY [nginx-docker_compose] ********************************************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************************************
[DEPRECATION WARNING]: Distribution Ubuntu 18.04 on host 172.31.15.176 should use /usr/bin/python3, but is using /usr/bin/python for backward compatibility with prior
Ansible releases. A future Ansible release will default to using the discovered platform python for this host. See
https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information. This feature will be removed in version 2.12. Deprecation
warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
ok: [172.31.15.176]
TASK [community.general.docker_compose] ********************************************************************************************************************************
fatal: [172.31.15.176]: FAILED! => {"changed": false, "errors": [], "module_stderr": "Recreating nginx_web_1 ... \n\u001b[1A\u001b[2K\nRecreating nginx_web_1 ... \n\u001b[1B", "module_stdout": "", "msg": "Error starting project Encountered errors while bringing up the project."}
PLAY RECAP *************************************************************************************************************************************************************
172.31.15.176 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
[root@ip-172-31-12-130 docker_server]# ansible-playbook -v nginx-playbook.yml -l ubuntu_node_1 -u root
Using /etc/ansible/ansible.cfg as config file
PLAY [nginx-docker_compose] ********************************************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************************************
[DEPRECATION WARNING]: Distribution Ubuntu 18.04 on host 172.31.15.176 should use /usr/bin/python3, but is using /usr/bin/python for backward compatibility with prior
Ansible releases. A future Ansible release will default to using the discovered platform python for this host. See
https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information. This feature will be removed in version 2.12. Deprecation
warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
ok: [172.31.15.176]
TASK [community.general.docker_compose] ********************************************************************************************************************************
fatal: [172.31.15.176]: FAILED! => {"changed": false, "errors": [], "module_stderr": "Recreating 9b102bbf98c2_nginx_web_1 ... \n\u001b[1A\u001b[2K\nRecreating 9b102bbf98c2_nginx_web_1 ... \n\u001b[1B", "module_stdout": "", "msg": "Error starting project Encountered errors while bringing up the project."}
PLAY RECAP *************************************************************************************************************************************************************
172.31.15.176 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
NOTE: When I try to run the nginx container directly using docker-compose with the same config on MACHINE-2, it works.
I believe there are some permission issues happening while executing the playbook from MACHINE-1 against MACHINE-2, but I can not figure it out.
It works now. Thanks to @mdaniel.
Things I changed:
I wrote the full path in the playbook: /home/some_more_folders/nginx.conf
and copied the same file with the same directory structure onto the destination machine.
Still open questions:
Any idea why it is necessary to copy the file (such as nginx.conf) to the destination machine?
How can this manual process of copying config files to the destination machine for docker-compose be automated?
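On the open questions: the docker_compose module runs Compose on the target host, so a bind-mount source like ./vars/nginx.conf is resolved there, not on the control machine; if the file is absent on the target, Docker creates the container but nginx cannot start. The copying can be automated with a copy task before the compose task. A hedged sketch (the /opt/nginx paths are hypothetical):

```yaml
- name: Copy nginx.conf to the target host
  copy:
    src: vars/nginx.conf          # path on the Ansible control machine
    dest: /opt/nginx/nginx.conf   # absolute path on the target host
    mode: '0644'

- community.general.docker_compose:
    project_name: nginx
    definition:
      version: '2'
      services:
        web:
          image: nginx:latest
          volumes:
            - /opt/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
          ports:
            - "8080:80"
```

Using an absolute path on the target also avoids depending on whatever working directory the module happens to run Compose from.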

Filter IP address of docker container via Ansible

I'm trying to get an IP address from a docker container with the Ansible module docker_container_info.
Below is my attempt at extracting it from the result.
- name: Get infos on container
  docker_container_info:
    name: nextcloud-db
  register: result_container

- name: Dump grep matching interfaces from ansible_interfaces
  set_fact:
    interfaces_list: "{{ result_container | select('match', '^IPAddress') }}"

- debug:
    var: result_container

- debug:
    var: interfaces_list
While trying this I get this error:
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"interfaces_list": "<generator object select_or_reject at 0x7f2bb30d55a0>"
}
How else do I extract the IP address from this result?
The goal is to create a variable that I can use later to dump a database and import it into another docker container.
The below works for me:
- hosts: localhost
  become: yes
  tasks:
    - name: Get infos on container
      docker_container_info:
        name: <some_name>
      register: result

    - debug:
        var: result | json_query('container.NetworkSettings.[IPAddress]')
play output:
PLAY [localhost] ***********************************************************************************************************************************************************
TASK [Gathering Facts] *****************************************************************************************************************************************************
ok: [localhost]
TASK [Get infos on container] **********************************************************************************************************************************************
ok: [localhost]
TASK [debug] ***************************************************************************************************************************************************************
ok: [localhost] => {
"result | json_query('container.NetworkSettings.[IPAddress]')": [
"172.17.0.2"
]
}
PLAY RECAP *****************************************************************************************************************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
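Note that the json_query filter requires the jmespath Python library on the controller. If you would rather avoid that dependency, plain attribute access on the registered result should work as well (a sketch against the same docker_container_info output):

```yaml
- debug:
    var: result.container.NetworkSettings.IPAddress
```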

Ansible Skipping Docker Build

Trying to get Ansible set up to learn about it, so this could be a very simple mistake, but I can't find the answer anywhere. When I try to run ansible-playbook it simply skips the task, with the following output:
ansible-playbook -i hosts simple-devops-image.yml --check
PLAY [all] ***********************************************************************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ***********************************************************************************************************************************************************************************************************************************************************************
[WARNING]: Platform linux on host 127.0.0.1 is using the discovered Python interpreter at /usr/bin/python, but future installation of another Python interpreter could change this. See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more
information.
ok: [127.0.0.1]
TASK [build docker image using war file] *****************************************************************************************************************************************************************************************************************************************************
skipping: [127.0.0.1]
PLAY RECAP ***********************************************************************************************************************************************************************************************************************************************************************************
127.0.0.1 : ok=1 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
My .yml playbook file:
---
- hosts: all
  become: yes
  tasks:
    - name: build docker image using war file
      command: docker build -t simple-devops-image .
      args:
        chdir: /usr/local/src
My hosts file:
[localhost]
127.0.0.1 ansible_connection=local
The command module is skipped when executing in check mode. Remove --check from the ansible-playbook command to build the docker image.
Here is a note from the docs:
Check mode is supported when passing creates or removes. If running in check mode and either of these are specified, the module will check for the existence of the file and report the correct changed status. If these are not supplied, the task will be skipped.
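Alternatively, if the build should run even when the play is invoked with --check, the task can opt out of check mode (a sketch of the same task):

```yaml
- name: build docker image using war file
  command: docker build -t simple-devops-image .
  args:
    chdir: /usr/local/src
  check_mode: no   # run this task even under --check
```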

Trying to install but getting an error "Failed to connect to the host via ssh: Permission denied (publickey,password)"

I am able to connect to the other nodes with SSH without a password. I have followed the IBM KC instructions. Here is the command and results:
ubuntu#ipc1:/opt/ibm-cloud-private-ce-3.1.0/cluster$ sudo docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception:3.1.0 install
PLAY [Checking Python interpreter] *********************************************
TASK [Checking Python interpreter] *********************************************
fatal: [172.31.39.234]: UNREACHABLE! => changed=false
  msg: Failed to connect to the host via ssh: Permission denied (publickey,password).
  unreachable: true
fatal: [172.31.39.53]: UNREACHABLE! => changed=false
  msg: Failed to connect to the host via ssh: Permission denied (publickey,password).
  unreachable: true
fatal: [172.31.44.240]: UNREACHABLE! => changed=false
  msg: Failed to connect to the host via ssh: Permission denied (publickey,password).
  unreachable: true
NO MORE HOSTS LEFT *************************************************************
NO MORE HOSTS LEFT *************************************************************
PLAY RECAP *********************************************************************
172.31.39.234 : ok=0 changed=0 unreachable=1 failed=0
172.31.39.53 : ok=0 changed=0 unreachable=1 failed=0
172.31.44.240 : ok=0 changed=0 unreachable=1 failed=0
Playbook run took 0 days, 0 hours, 0 minutes, 0 seconds
Can you ssh between hosts without specifying any password?
And by using sudo, that means you are trying to ssh passwordless as root.
So I think you haven't copied the root ssh key between your hosts.
Good luck.
You are facing this issue because you have not set up passwordless authentication to the same server (self-ssh).
Follow these steps and you will be able to get rid of the issue specified above.
[root@localhost ~]# ssh-keygen
Sample Output:
Then run the following commands:
[root@localhost ~]# touch ~/.ssh/authorized_keys
[root@localhost ~]# chmod 600 ~/.ssh/authorized_keys
[root@localhost ~]# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[root@localhost ~]# cd /opt/ibm-cloud-private-ce-3.1.0/cluster/
[root@localhost ~]# cp -rp ~/.ssh/id_rsa ./ssh_key
Also make sure that the hostname is mapped to the host's IP address in the local /etc/hosts.
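The steps above can be rehearsed locally in a throwaway directory first, so a typo cannot clobber your real ~/.ssh (a sketch mirroring the touch/chmod/cat sequence):

```shell
# Generate a key pair and build an authorized_keys file with the
# required 600 permissions, all inside a temporary directory.
dir=$(mktemp -d)
ssh-keygen -q -t rsa -N '' -f "$dir/id_rsa"
touch "$dir/authorized_keys"
chmod 600 "$dir/authorized_keys"
cat "$dir/id_rsa.pub" >> "$dir/authorized_keys"
```

Once this works, repeat it against the real ~/.ssh on the boot node and copy the key to each cluster node as the KC page describes.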
Before you install an IBM Cloud Private cluster, you must configure authentication between configuration nodes. You can generate an SSH key pair on your boot node and share that key with the other cluster nodes. To share the key with the cluster nodes, you must have the access to an account with root access for each node in your cluster.
Follow the ICP 3.1.0 Knowledge Center (KC) steps here:
https://www.ibm.com/support/knowledgecenter/SSBS6K_3.1.0/installing/ssh_keys.html
