How to add a when or if condition on an Ansible variable - Jenkins

Is it possible to add a when or if condition to an Ansible variable?
For example, we have two Jenkins servers (server A and server B) and we want to apply different plugin versions to each of them.
Example:
jenkins_plugins:
  - name: plugin-x
    version: "1.1" {"if server == A"}
  - name: plugin-x
    version: "1.5" {"if server == B"}
Thanks.
I want to apply different plugin versions to different servers.

Create the dictionary
jenkins_plugins:
  A:
    - name: plugin-x
      version: 1.1
  B:
    - name: plugin-x
      version: 1.5
Use it
- debug:
    var: jenkins_plugins[inventory_hostname]
Notes
Given the inventory
shell> cat hosts
A
B
Example of a complete playbook for testing
- hosts: all
  vars:
    jenkins_plugins:
      A:
        - name: plugin-x
          version: 1.1
      B:
        - name: plugin-x
          version: 1.5
  tasks:
    - debug:
        var: jenkins_plugins[inventory_hostname]
gives
PLAY [all] ***********************************************************************************
TASK [debug] *********************************************************************************
ok: [A] =>
  jenkins_plugins[inventory_hostname]:
  - name: plugin-x
    version: 1.1
ok: [B] =>
  jenkins_plugins[inventory_hostname]:
  - name: plugin-x
    version: 1.5
PLAY RECAP ***********************************************************************************
A: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
B: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
An elegant and clean option is to put the variables into group_vars. For example,
shell> cat group_vars/all/jenkins_plugins.yml
jenkins_plugins_dict:
  default:
    - name: plugin-x
      version: 1.0
  A:
    - name: plugin-x
      version: 1.1
  B:
    - name: plugin-x
      version: 1.5
jenkins_plugins: "{{ jenkins_plugins_dict[inventory_hostname]|
                     default(jenkins_plugins_dict.default) }}"
Then, given the inventory
shell> cat hosts
A
B
X
The playbook
- hosts: all
  tasks:
    - debug:
        var: jenkins_plugins
gives
PLAY [all] ***********************************************************************************
TASK [debug] *********************************************************************************
ok: [B] =>
  jenkins_plugins:
  - name: plugin-x
    version: 1.5
ok: [A] =>
  jenkins_plugins:
  - name: plugin-x
    version: 1.1
ok: [X] =>
  jenkins_plugins:
  - name: plugin-x
    version: 1.0
PLAY RECAP ***********************************************************************************
A: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
B: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
X: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
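To actually install the per-host plugin list rather than only display it, the same expression can drive a loop. Below is a minimal sketch, assuming the community.general.jenkins_plugin module and admin credentials (url_username, url_password, and a jenkins_admin_password variable) that are not part of the example above:
- name: Install the plugins selected for this host
  community.general.jenkins_plugin:
    name: "{{ item.name }}"
    version: "{{ item.version }}"
    url_username: admin                           # assumption: Jenkins admin account
    url_password: "{{ jenkins_admin_password }}"  # assumption: defined elsewhere, e.g. in a vault
  loop: "{{ jenkins_plugins[inventory_hostname] }}"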

Ansible can hold different variables for each of your servers/groups in your inventory out of the box, and this can actually be much more powerful and maintainable than the condition above.
As an illustration of How to build your inventory, and to go a bit beyond your actual example, let's imagine you have a default list of plugins for all Jenkins servers and you want to override the version of some of them, or add additional ones, on a specific server.
(Note: this example will not address removing specific plugins from the default list, but that is also possible; see the sketch at the end of this answer.)
Here's my example global file structure:
.
├── check_plugins_versions.yml
└── inventories
    └── demo
        ├── group_vars
        │   └── jenkins
        │       └── plugins.yml
        ├── hosts.yml
        └── host_vars
            └── server2.jenkins.local
                └── specific_jenkins_plugins.yml
The hosts are declared in inventories/demo/hosts.yml
---
jenkins:
  hosts:
    server1.jenkins.local:
    server2.jenkins.local:
As you can see, I declared two hosts as in your question and placed them in a jenkins group. So we can now declare variables for that group inside the inventories/demo/group_vars/jenkins directory, where I placed the plugins.yml file:
---
# List of plugins to install by default
_default_jenkins_plugins:
  - name: plugin-a
    version: "1.1"
  - name: plugin-b
    version: "2.2"
  - name: plugin-x
    version: "1.1"

# Customize plugin list with specific plugins for host if they exist. For this we
# transform the above default list and the specific one to dicts,
# combine them and turn them back into a result list
_d_plug_dict: "{{ _default_jenkins_plugins
  | items2dict(key_name='name', value_name='version') }}"
_s_plug_dict: "{{ _specific_jenkins_plugins | d([])
  | items2dict(key_name='name', value_name='version') }}"
jenkins_plugins: "{{ _d_plug_dict | combine(_s_plug_dict)
  | dict2items(key_name='name', value_name='version') }}"
Now we can give a list of specific plugins/versions for server2.jenkins.local. I placed those values in inventories/demo/host_vars/server2.jenkins.local/specific_jenkins_plugins.yml
---
_specific_jenkins_plugins:
  # Override version for plugin-x
  - name: plugin-x
    version: "1.5"
  # Install an additional plugin-z
  - name: plugin-z
    version: "9.9"
Then we have a simple check_plugins_versions.yml test playbook to verify that the above is doing the job as expected:
---
- hosts: jenkins
  gather_facts: false

  tasks:
    - name: show computed plugins for each server
      debug:
        var: jenkins_plugins
Which gives, when called with the relevant inventory:
$ ansible-playbook -i inventories/demo/ check_plugins_versions.yml
PLAY [jenkins] *************************************************************************************************************************************************************************************************************************
TASK [show computed plugins for each server] *******************************************************************************************************************************************************************************************
ok: [server1.jenkins.local] => {
    "jenkins_plugins": [
        {
            "name": "plugin-a",
            "version": "1.1"
        },
        {
            "name": "plugin-b",
            "version": "2.2"
        },
        {
            "name": "plugin-x",
            "version": "1.1"
        }
    ]
}
ok: [server2.jenkins.local] => {
    "jenkins_plugins": [
        {
            "name": "plugin-a",
            "version": "1.1"
        },
        {
            "name": "plugin-b",
            "version": "2.2"
        },
        {
            "name": "plugin-x",
            "version": "1.5"
        },
        {
            "name": "plugin-z",
            "version": "9.9"
        }
    ]
}
PLAY RECAP *****************************************************************************************************************************************************************************************************************************
server1.jenkins.local : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
server2.jenkins.local : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
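Regarding the earlier note about removing plugins from the default list: a minimal sketch of one way to do it, assuming a hypothetical _excluded_jenkins_plugins list variable in host_vars (everything beyond the existing variables is an assumption, not part of the tested example above):
# host_vars/server2.jenkins.local: names of default plugins this host should not get (hypothetical)
_excluded_jenkins_plugins:
  - plugin-b

# group_vars/jenkins: drop the excluded names after combining defaults and host-specific plugins
jenkins_plugins: "{{ _d_plug_dict | combine(_s_plug_dict)
  | dict2items(key_name='name', value_name='version')
  | rejectattr('name', 'in', _excluded_jenkins_plugins | d([]))
  | list }}"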

Related

Deleting multiple files and folders using Ansible

I need to delete files and folders using an Ansible playbook. I pass the file/folder paths as a variable to the playbook from a Groovy script.
The variables are in a properties file named delete.properties. I stored the file/folder paths separately in a variable so I can change the paths as needed in the future.
delete.properties:
delete_files=/home/new-user/myfolder/dltfolder1 /home/new-user/myfolder/dltfolder2 /home/new-user/myfolder/dltfolder3
Groovy script:
stage("Read variable"){
steps{
script{
def propertifile = readFile(properti file path)
deleteParams = new Properties()
deleteParams.load(new StringReader(propertifile))
}
}
}
stage("Delete files/folders"){
steps{
script{
sh script: """cd ansible code path && \
export ANSIBLE_HOST_KEY_CHECKING=False && \
ansible-playbook delete.yml \
--extra-vars"dete_files=${deleteParams.delete_files}" --user user"""
}
}
}
Ansible playbook:
---
- name: delete files
  hosts: localhost
  tasks:
    - name: delete files
      file:
        path: "{{ delete_files }}"
        state: absent
As a result of the above code, only the first path in the delete_files variable of the delete.properties file (/home/new-user/myfolder/dltfolder1) gets deleted.
I need to delete the other file/folder paths included in the delete_files variable too.
Put the path of the file into the extra vars. For example,
sh script: """cd ansible code path && \
export ANSIBLE_HOST_KEY_CHECKING=False && \
ansible-playbook delete.yml \
--extra-vars "dete_files=/tmp/delete.properties" --user user"""
Then, given the tree
shell> tree /tmp/test
/tmp/test
├── f1
├── f2
└── f3
, the file
shell> cat /tmp/delete.properties
delete_files=/tmp/test/f1 /tmp/test/f2 /tmp/test/f3
, and the playbook
shell> cat delete.yml
- hosts: localhost
  vars:
    delete_files: "{{ lookup('ini',
                             'delete_files',
                             file=dete_files,
                             type='properties') }}"
  tasks:
    - debug:
        var: delete_files
    - name: delete files
      file:
        path: "{{ item }}"
        state: absent
      loop: "{{ delete_files.split() }}"
gives, running in --check --diff mode
shell> ansible-playbook delete.yml --extra-vars "dete_files=/tmp/delete.properties" -CD
PLAY [localhost] *****************************************************************************
TASK [debug] *********************************************************************************
ok: [localhost] =>
  delete_files: /tmp/test/f1 /tmp/test/f2 /tmp/test/f3
TASK [delete files] **************************************************************************
--- before
+++ after
@@ -1,5 +1,2 @@
path: /tmp/test/f1
-path_content:
- directories: []
- files: []
-state: directory
+state: absent
changed: [localhost] => (item=/tmp/test/f1)
--- before
+++ after
@@ -1,5 +1,2 @@
path: /tmp/test/f2
-path_content:
- directories: []
- files: []
-state: directory
+state: absent
changed: [localhost] => (item=/tmp/test/f2)
--- before
+++ after
@@ -1,5 +1,2 @@
path: /tmp/test/f3
-path_content:
- directories: []
- files: []
-state: directory
+state: absent
changed: [localhost] => (item=/tmp/test/f3)
PLAY RECAP ***********************************************************************************
localhost: ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
One solution would be to parse the properties file inside the Ansible playbook, with an ini lookup, if you are indeed acting on localhost, as you are showing in your playbook:
- hosts: localhost
  gather_facts: no

  tasks:
    - file:
        path: "{{ item }}"
        state: absent
      loop: >-
        {{
          lookup(
            'ini',
            'delete_files type=properties file=delete.properties'
          ).split()
        }}
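Note that the lookup above uses a relative file=delete.properties, so the properties file is assumed to sit in the directory you run Ansible from (an assumption of this variant); with that in place the call reduces to:
shell> ansible-playbook delete.yml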

Ansible validate docker-compose with env_file and send to host

I created a role that has two files in its template folder: docker-compose.yml.j2 and env.j2.
env.j2 is used in the docker-compose file:
version: "2"
services:
service_name:
image: {{ IMAGE | mandatory }}
container_name: service_name
mem_limit: 256m
user: "2001"
env_file: ".env"
Now my question: is there some Ansible module that sends the docker-compose file to the host and validates it there, since the .env and docker-compose files are then in the same folder on the host machine?
This example of an Ansible task returns an error because the env file is not in the template folder but on the host.
- name: "Copy env file"
ansible.builtin.template:
src: "env.j2"
dest: "/opt/db_backup/.env"
mode: '770'
owner: deployment
group: deployment
- name: "Validate and copy docker compose file"
ansible.builtin.template:
src: "docker-compose.yml.j2"
dest: "/opt/db_backup/docker-compose.yml"
mode: '770'
owner: deployment
group: deployment
validate: docker-compose -f %s config
This probably falls into the Complex validation configuration cases linked in the documentation for the template module's validate parameter.
In any case, unless you completely refactor your current files and pass more variables in your environment (e.g. to allow .env to live outside the current directory), you cannot validate docker-compose.yml until both files are in the same location.
An easy approach is to copy both files in place, validate them before doing anything else, and roll back to the previous version in case of error. The example below is far from bulletproof but will give you an idea:
---
- hosts: localhost
  gather_facts: false

  vars:
    IMAGE: alpine:latest
    deploy_dir: /tmp/validate_compose

  tasks:
    - name: "make sure {{ deploy_dir }} directory exists"
      file:
        path: "{{ deploy_dir }}"
        state: directory

    - name: copy project file templates
      template:
        src: "{{ item }}"
        dest: "{{ deploy_dir }}/{{ item | regex_replace('^(.*)\\.j2', '\\g<1>') }}"
        mode: 0640
        backup: true
      loop:
        - .env.j2
        - docker-compose.yml.j2
      register: copy_files

    - block:
        - name: check docker-compose file validity
          command:
            cmd: docker-compose config
            chdir: "{{ deploy_dir }}"

      rescue:
        - name: rollback configuration to previous version for changed files
          copy:
            src: "{{ item.backup_file }}"
            dest: "{{ item.dest }}"
            remote_src: true
          loop: "{{ copy_files.results | selectattr('backup_file', 'defined') | list }}"

        - name: Give some info about error.
          debug:
            msg:
              - The compose file did not validate.
              - Please see previous error above for details
              - Files have been rolled back to the latest known version.

        - name: Fail
          fail:

    - name: Rest of the playbook using the above validated files
      debug:
        msg: Next tasks...

jenkins ansible-plugin can't find ansible role

Hope you guys are doing great.
I have a problem running a playbook using the Ansible plugin in Jenkins.
When I run the build, it gives me this error:
[ansible-demo] $ /usr/bin/ansible-playbook /var/lib/jenkins/workspace/ansible-demo/ansible-openshift.yaml -f 5
ERROR! the role 'ansible.kubernetes-modules' was not found in /var/lib/jenkins/workspace/ansible-demo/roles:/var/lib/jenkins/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles:/var/lib/jenkins/workspace/ansible-demo
The error appears to be in '/var/lib/jenkins/workspace/ansible-demo/ansible-openshift.yaml': line 6, column 7, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
roles:
- role: ansible.kubernetes-modules
^ here
FATAL: command execution failed
hudson.AbortException: Ansible playbook execution failed
at org.jenkinsci.plugins.ansible.AnsiblePlaybookBuilder.perform(AnsiblePlaybookBuilder.java:262)
at org.jenkinsci.plugins.ansible.AnsiblePlaybookBuilder.perform(AnsiblePlaybookBuilder.java:232)
at jenkins.tasks.SimpleBuildStep.perform(SimpleBuildStep.java:123)
at hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:78)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:806)
at hudson.model.Build$BuildExecution.build(Build.java:198)
at hudson.model.Build$BuildExecution.doRun(Build.java:163)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:514)
at hudson.model.Run.execute(Run.java:1888)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:99)
at hudson.model.Executor.run(Executor.java:431)
ERROR: Ansible playbook execution failed
Finished: FAILURE
Here is the YAML file that I use to deploy to the OpenShift cluster:
---
- hosts: 127.0.0.1
  become: yes
  become_user: oassaghir
  roles:
    - role: ansible.kubernetes-modules
  vars:
    ansible_python_interpreter: /usr/bin/python3
  tasks:
    - name: Try to login to Okd cluster
      k8s_auth:
        host: https://127.0.0.1:8443
        username: developer
        password: ****
        validate_certs: no
      register: k8s_auth_result

    - name: deploy hello-world pod
      k8s:
        state: present
        apply: yes
        namespace: myproject
        host: https://127.0.0.1:8443
        api_key: "{{ k8s_auth_result.k8s_auth.api_key }}"
        validate_certs: no
        definition:
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: hello-openshift
            labels:
              name: hello-openshift
          spec:
            selector:
              matchLabels:
                app: hello-openshift
            replicas: 1
            template:
              metadata:
                labels:
                  app: hello-openshift
              spec:
                containers:
                  - name: hello-openshift
                    image: openshift/hello-openshift
                    ports:
                      - containerPort: 8080
                        protocol: TCP
                    resources:
                      requests:
                        cpu: 300m
                        memory: 64Mi
                      limits:
                        cpu: 600m
                        memory: 128Mi
When I run the playbook on my machine it works, but with Jenkins it does not.
On my machine:
[oassaghir@openshift ansible-demo]$ sudo ansible-playbook ansible-openshift.yaml
PLAY [127.0.0.1] **************************************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************************************
ok: [127.0.0.1]
TASK [ansible.kubernetes-modules : Install latest openshift client] ***********************************************************************
skipping: [127.0.0.1]
TASK [Try to login to Okd cluster] ********************************************************************************************************
ok: [127.0.0.1]
TASK [deploy hello-world pod] *************************************************************************************************************
ok: [127.0.0.1]
PLAY RECAP ********************************************************************************************************************************
127.0.0.1 : ok=3 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
Can someone help, please?

Gitlab integration of RabbitMQ as a service

I'm trying to set up GitLab CI with different services, because I have a Node.js app and I would like to do integration testing with services like RabbitMQ, Cassandra, etc.
Question + Description of the problem + Possible Solution
Does someone know how to write the GitLab configuration file (.gitlab-ci.yml) to integrate RabbitMQ as a service, where I define a configuration file to create specific virtual hosts, exchanges, queues and users?
In a section of my .gitlab-ci.yml I defined a variable which should point to the rabbitmq.config file as specified in the official documentation (https://www.rabbitmq.com/configure.html#config-location), but this does not work.
...
services:
  # - cassandra:3.11
  - rabbitmq:management
variables:
  RABBITMQ_CONF_FILE: rabbitmq.conf
...
The file I need to point to in my GitLab configuration: rabbitmq.conf
In this file I want to specify a file rabbitmq-definition.json containing my specific virtual hosts, exchanges, queues and users for RabbitMQ.
[
  {rabbit, [
    {loopback_users, []},
    {vm_memory_high_watermark, 0.7},
    {vm_memory_high_watermark_paging_ratio, 0.8},
    {log_levels, [{channel, warning}, {connection, warning}, {federation, warning}, {mirroring, info}]},
    {heartbeat, 10}
  ]},
  {rabbitmq_management, [
    {load_definitions, "./rabbitmq-definition.json"}
  ]}
].
The file containing my RabbitMQ configuration: rabbitmq-definition.json
{
  "rabbit_version": "3.8.9",
  "rabbitmq_version": "3.8.9",
  "product_name": "RabbitMQ",
  "product_version": "3.8.9",
  "users": [
    {
      "name": "guest",
      "password_hash": "9OhzGMQqiSCStw2uosywVW2mm95V/I6zLoeOIuVZZm8yFqAV",
      "hashing_algorithm": "rabbit_password_hashing_sha256",
      "tags": "administrator"
    },
    {
      "name": "test",
      "password_hash": "4LWHqT8/KZN8EHa1utXAknONOCjRTZKNoUGdcP3PfG0ljM7L",
      "hashing_algorithm": "rabbit_password_hashing_sha256",
      "tags": "management"
    }
  ],
  "vhosts": [
    {
      "name": "my_virtualhost"
    },
    {
      "name": "/"
    }
  ],
  "permissions": [
    {
      "user": "guest",
      "vhost": "/",
      "configure": ".*",
      "write": ".*",
      "read": ".*"
    },
    {
      "user": "guest",
      "vhost": "my_virtualhost",
      "configure": ".*",
      "write": ".*",
      "read": ".*"
    },
    {
      "user": "test",
      "vhost": "my_virtualhost",
      "configure": "^(my).*",
      "write": "^(my).*",
      "read": "^(my).*"
    }
  ],
  "topic_permissions": [],
  "parameters": [],
  "policies": [],
  "queues": [
    {
      "name": "my_queue",
      "vhost": "my_virtualhost",
      "durable": true,
      "auto_delete": false,
      "arguments": {}
    }
  ],
  "exchanges": [
    {
      "name": "my_exchange",
      "vhost": "my_virtualhost",
      "type": "topic",
      "durable": true,
      "auto_delete": false,
      "internal": false,
      "arguments": {}
    },
    {
      "name": "my_exchange",
      "vhost": "/",
      "type": "direct",
      "durable": true,
      "auto_delete": false,
      "internal": false,
      "arguments": {}
    }
  ],
  "bindings": [
    {
      "source": "my_exchange",
      "vhost": "my_virtualhost",
      "destination": "my_queue",
      "destination_type": "queue",
      "routing_key": "test.test.*.1",
      "arguments": {}
    }
  ]
}
Existing Setup
Existing file .gitlab-ci.yml:
#image: node:latest
image: node:12

cache:
  paths:
    - node_modules/

stages:
  - install
  - test
  - build
  - deploy
  - security
  - leanix

variables:
  NODE_ENV: "CI"
  ENO_ENV: "CI"
  LOG_FOLDER: "."
  LOG_FILE: "queries.log"

.caching:
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - node_modules/
    policy: pull

before_script:
  - npm ci --cache .npm --prefer-offline --no-audit

#install_dependencies:
#  stage: install
#  script:
#    - npm install --no-audit
#  only:
#    changes:
#      - package-lock.json

# test:quality:
#   stage: test
#   allow_failure: true
#   script:
#     - npx eslint --format table .

# test:unit:
#   stage: test
#   script:
#     - npm run test
#   coverage: /All files[^|]*\|[^|]*\s+([\d\.]+)/
#   artifacts:
#     reports:
#       junit: test-results.xml

# test_node14:unit:
#   image: node:14
#   stage: test
#   script:
#     - npm run test
#   coverage: /All files[^|]*\|[^|]*\s+([\d\.]+)/
#   artifacts:
#     reports:
#       junit: test-results.xml

test:integration:
  stage: test
  script:
    - npm run test_integration
  services:
    # - cassandra:3.11
    - rabbitmq:management
  variables:
    RABBITMQ_CONF_FILE: rabbitmq.conf
    # RABBITMQ_DEFAULT_USER: guest
    # RABBITMQ_DEFAULT_PASS: guest
    # RABBITMQ_DEFAULT_VHOST: 'my_virtualhost'
    # AMQP_URL: 'amqp://guest:guest@rabbitmq:5672'
  coverage: /All files[^|]*\|[^|]*\s+([\d\.]+)/
  artifacts:
    reports:
      junit: test-results.xml

dependency_scan:
  stage: security
  allow_failure: false
  script:
    - npm audit --audit-level=moderate

include:
  - template: Security/Secret-Detection.gitlab-ci.yml
  - template: Security/SAST.gitlab-ci.yml

secret_detection:
  stage: security
  before_script: []

secret_detection_default_branch:
  stage: security
  before_script: []

nodejs-scan-sast:
  stage: security
  before_script: []

eslint-sast:
  stage: security
  before_script: []

leanix_sync:
  stage: leanix
  variables:
    ENV: "development"
  rules:
    - if: '$CI_COMMIT_BRANCH == "develop"'
      variables:
        ENV: "development"
    - if: '$CI_COMMIT_BRANCH == "test"'
      variables:
        ENV: "uat"
    - if: '$CI_COMMIT_BRANCH == "master"'
      variables:
        ENV: "production"
  before_script:
    - apt update && apt -y install jq
  script:
    - VERSION=$(cat package.json | jq -r .version)
    - npm run dependencies_check
    - echo "Update LeanIx Factsheet "
    ...
  allow_failure: true
This is my .env_CI file:
CASSANDRA_CONTACTPOINTS = localhost
CASSANDRA_KEYSPACE = pfm
CASSANDRA_USER = "cassandra"
CASSANDRA_PASS = "cassandra"
RABBITMQ_HOSTS=rabbitmq
RABBITMQ_PORT=5672
RABBITMQ_VHOST=my_virtualhost
RABBITMQ_USER=guest
RABBITMQ_PASS=guest
RABBITMQ_PROTOCOL=amqp
PORT = 8091
Logs of a run after a commit on the node-api project:
Running with gitlab-runner 13.12.0 (7a6612da)
on Enocloud-Gitlab-Runner PstDVLop
Preparing the "docker" executor
00:37
Using Docker executor with image node:12 ...
Starting service rabbitmq:management ...
Pulling docker image rabbitmq:management ...
Using docker image sha256:737d67e8db8412d535086a8e0b56e6cf2a6097906e2933803c5447c7ff12f265 for rabbitmq:management with digest rabbitmq#sha256:b29faeb169f3488b3ccfee7ac889c1c804c7102be83cb439e24bddabe5e6bdfb ...
Waiting for services to be up and running...
*** WARNING: Service runner-pstdvlop-project-372-concurrent-0-b78aed36fb13c180-rabbitmq-0 probably didn't start properly.
Health check error:
Service container logs:
2021-08-05T15:39:02.476374200Z 2021-08-05 15:39:02.456089+00:00 [info] <0.222.0> Feature flags: list of feature flags found:
2021-08-05T15:39:02.476612801Z 2021-08-05 15:39:02.475702+00:00 [info] <0.222.0> Feature flags: [ ] implicit_default_bindings
...
2021-08-05T15:39:03.024092380Z 2021-08-05 15:39:03.023476+00:00 [noti] <0.44.0> Application syslog exited with reason: stopped
2021-08-05T15:39:03.024287781Z 2021-08-05 15:39:03.023757+00:00 [noti] <0.222.0> Logging: switching to configured handler(s); following messages may not be visible in this log output
2021-08-05T15:39:03.045901591Z 2021-08-05 15:39:03.045602+00:00 [noti] <0.222.0> Logging: configured log handlers are now ACTIVE
2021-08-05T15:39:03.391624143Z 2021-08-05 15:39:03.391057+00:00 [info] <0.222.0> ra: starting system quorum_queues
2021-08-05T15:39:03.391785874Z 2021-08-05 15:39:03.391207+00:00 [info] <0.222.0> starting Ra system: quorum_queues in directory: /var/lib/rabbitmq/mnesia/rabbit#635519274c80/quorum/rabbit#635519274c80
2021-08-05T15:39:03.510825736Z 2021-08-05 15:39:03.510441+00:00 [info] <0.259.0> ra: meta data store initialised for system quorum_queues. 0 record(s) recovered
2021-08-05T15:39:03.536493082Z 2021-08-05 15:39:03.536098+00:00 [noti] <0.264.0> WAL: ra_log_wal init, open tbls: ra_log_open_mem_tables, closed tbls: ra_log_closed_mem_tables
2021-08-05T15:39:03.547541524Z 2021-08-05 15:39:03.546999+00:00 [info] <0.222.0> ra: starting system coordination
2021-08-05T15:39:03.547876996Z 2021-08-05 15:39:03.547058+00:00 [info] <0.222.0> starting Ra system: coordination in directory: /var/lib/rabbitmq/mnesia/rabbit#635519274c80/coordination/rabbit#635519274c80
2021-08-05T15:39:03.551508520Z 2021-08-05 15:39:03.551130+00:00 [info] <0.272.0> ra: meta data store initialised for system coordination. 0 record(s) recovered
2021-08-05T15:39:03.552002433Z 2021-08-05 15:39:03.551447+00:00 [noti] <0.277.0> WAL: ra_coordination_log_wal init, open tbls: ra_coordination_log_open_mem_tables, closed tbls: ra_coordination_log_closed_mem_tables
2021-08-05T15:39:03.557022096Z 2021-08-05 15:39:03.556629+00:00 [info] <0.222.0>
2021-08-05T15:39:03.557045886Z 2021-08-05 15:39:03.556629+00:00 [info] <0.222.0> Starting RabbitMQ 3.9.1 on Erlang 24.0.5 [jit]
2021-08-05T15:39:03.557050686Z 2021-08-05 15:39:03.556629+00:00 [info] <0.222.0> Copyright (c) 2007-2021 VMware, Inc. or its affiliates.
2021-08-05T15:39:03.557069166Z 2021-08-05 15:39:03.556629+00:00 [info] <0.222.0> Licensed under the MPL 2.0. Website: https://rabbitmq.com
2021-08-05T15:39:03.558119613Z
2021-08-05T15:39:03.558134063Z ## ## RabbitMQ 3.9.1
2021-08-05T15:39:03.558139043Z ## ##
2021-08-05T15:39:03.558142303Z ########## Copyright (c) 2007-2021 VMware, Inc. or its affiliates.
2021-08-05T15:39:03.558145473Z ###### ##
2021-08-05T15:39:03.558201373Z ########## Licensed under the MPL 2.0. Website: https://rabbitmq.com
2021-08-05T15:39:03.558206473Z
2021-08-05T15:39:03.558210714Z Erlang: 24.0.5 [jit]
2021-08-05T15:39:03.558215324Z TLS Library: OpenSSL - OpenSSL 1.1.1k 25 Mar 2021
2021-08-05T15:39:03.558219824Z
2021-08-05T15:39:03.558223984Z Doc guides: https://rabbitmq.com/documentation.html
2021-08-05T15:39:03.558227934Z Support: https://rabbitmq.com/contact.html
2021-08-05T15:39:03.558232464Z Tutorials: https://rabbitmq.com/getstarted.html
2021-08-05T15:39:03.558236944Z Monitoring: https://rabbitmq.com/monitoring.html
2021-08-05T15:39:03.558241154Z
2021-08-05T15:39:03.558244394Z Logs: /var/log/rabbitmq/rabbit#635519274c80_upgrade.log
2021-08-05T15:39:03.558247324Z <stdout>
2021-08-05T15:39:03.558250464Z
2021-08-05T15:39:03.558253304Z Config file(s): /etc/rabbitmq/conf.d/10-default-guest-user.conf
2021-08-05T15:39:03.558256274Z
2021-08-05T15:39:03.558984369Z Starting broker...2021-08-05 15:39:03.558703+00:00 [info] <0.222.0>
2021-08-05T15:39:03.558996969Z 2021-08-05 15:39:03.558703+00:00 [info] <0.222.0> node : rabbit#635519274c80
2021-08-05T15:39:03.559000489Z 2021-08-05 15:39:03.558703+00:00 [info] <0.222.0> home dir : /var/lib/rabbitmq
2021-08-05T15:39:03.559003679Z 2021-08-05 15:39:03.558703+00:00 [info] <0.222.0> config file(s) : /etc/rabbitmq/conf.d/10-default-guest-user.conf
2021-08-05T15:39:03.559006959Z 2021-08-05 15:39:03.558703+00:00 [info] <0.222.0> cookie hash : 1iZSjTlqOt/PC9WvpuHVSg==
2021-08-05T15:39:03.559010669Z 2021-08-05 15:39:03.558703+00:00 [info] <0.222.0> log(s) : /var/log/rabbitmq/rabbit#635519274c80_upgrade.log
2021-08-05T15:39:03.559014249Z 2021-08-05 15:39:03.558703+00:00 [info] <0.222.0> : <stdout>
2021-08-05T15:39:03.559017899Z 2021-08-05 15:39:03.558703+00:00 [info] <0.222.0> database dir : /var/lib/rabbitmq/mnesia/rabbit#635519274c80
2021-08-05T15:39:03.893651319Z 2021-08-05 15:39:03.892900+00:00 [info] <0.222.0> Feature flags: list of feature flags found:
2021-08-05T15:39:09.081076751Z 2021-08-05 15:39:09.080611+00:00 [info] <0.659.0> * rabbitmq_management_agent
----
Pulling docker image node:12 ...
Using docker image sha256:7e90b11a78a2c66f8824cb7a125dc0e9340d6e17d66bd8f6ba9dd2717af56f6b for node:12 with digest node#sha256:... ...
Preparing environment 00:01
Running on runner-pstdvlop-project-372-concurrent-0 via gitlab-runner01...
Getting source from Git repository 00:01
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in /builds/node-api/.git/
Checking out 4ce1ae1a as PM-1814...
Removing .npm/
Removing node_modules/
Skipping Git submodules setup
Restoring cache 00:03
Checking cache for default...
No URL provided, cache will not be downloaded from shared cache server. Instead a local version of cache will be extracted.
WARNING: node_modules/.bin/depcheck: chmod node_modules/.bin/depcheck: no such file or directory (suppressing repeats)
Successfully extracted cache
Executing "step_script" stage of the job script 00:20
Using docker image sha256:7e90b11a78a2c66f8824cb7a125dc0e9340d6e17d66bd8f6ba9dd2717af56f6b for node:12 with digest node#sha256: ...
$ npm ci --cache .npm --prefer-offline --no-audit
npm WARN prepare removing existing node_modules/ before installation
> node-cron#2.0.3 postinstall /builds/node-api/node_modules/node-cron
> opencollective-postinstall
> core-js#2.6.12 postinstall /builds/node-api/node_modules/core-js
> node -e "try{require('./postinstall')}catch(e){}"
added 642 packages in 10.824s
$ npm run test_integration
> pfm-liveprice-api#0.1.3 test_integration /builds/node-api
> npx nyc mocha test/integration --exit --timeout 10000 --reporter mocha-junit-reporter
RABBITMQ_PROTOCOL : amqp RABBITMQ_USER : guest RABBITMQ_PASS : guest
config.js parseInt(RABBITMQ_PORT) : NaN
simple message
[x] Sent 'Hello World!'
this queue [object Object] exists
----------------------------|---------|----------|---------|---------|-------------------
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s
----------------------------|---------|----------|---------|---------|-------------------
All files | 5.49 | 13.71 | 4.11 | 5.33 |
pfm-liveprice-api | 21.3 | 33.8 | 21.43 | 21 |
app.js | 0 | 0 | 0 | 0 | 1-146
config.js | 76.67 | 55.81 | 100 | 77.78 | 19-20,48,55,67-69
pfm-liveprice-api/routes | 0 | 0 | 0 | 0 |
index.js | 0 | 100 | 0 | 0 | 1-19
info.js | 0 | 100 | 0 | 0 | 1-15
liveprice.js | 0 | 0 | 0 | 0 | 1-162
status.js | 0 | 100 | 0 | 0 | 1-14
pfm-liveprice-api/services | 0 | 0 | 0 | 0 |
rabbitmq.js | 0 | 0 | 0 | 0 | 1-110
pfm-liveprice-api/utils | 0 | 0 | 0 | 0 |
buildBinding.js | 0 | 0 | 0 | 0 | 1-35
buildProducts.js | 0 | 0 | 0 | 0 | 1-70
store.js | 0 | 0 | 0 | 0 | 1-291
----------------------------|---------|----------|---------|---------|-------------------
=============================== Coverage summary ===============================
Statements : 5.49% ( 23/419 )
Branches : 13.71% ( 24/175 )
Functions : 4.11% ( 3/73 )
Lines : 5.33% ( 21/394 )
================================================================================
Saving cache for successful job
00:05
Creating cache default...
node_modules/: found 13259 matching files and directories
No URL provided, cache will be not uploaded to shared cache server. Cache will be stored only locally.
Created cache
Uploading artifacts for successful job
00:01
Uploading artifacts...
WARNING: test-results.xml: no matching files
ERROR: No files to upload
Cleaning up file based variables
00:01
Job succeeded
Tried and does not work
Using variables to define the RabbitMQ settings is deprecated, and a config file is required.
If I try to use the following vars in my .gitlab-ci.yml:
...
test:integration:
  stage: test
  script:
    - npm run test_integration
  services:
    # - cassandra:3.11
    - rabbitmq:management
  variables:
    RABBITMQ_DEFAULT_USER: guest
    RABBITMQ_DEFAULT_PASS: guest
    RABBITMQ_DEFAULT_VHOST: 'my_virtualhost'
    AMQP_URL: 'amqp://guest:guest@rabbitmq:5672'
  coverage: /All files[^|]*\|[^|]*\s+([\d\.]+)/
  artifacts:
    reports:
      junit: test-results.xml
...
I get the following output:
...
Starting service rabbitmq:latest ...
Pulling docker image rabbitmq:latest ...
Using docker image sha256:1c609d1740383796a30facdb06e52905e969f599927c1a537c10e4bcc6990193 for rabbitmq:latest with digest rabbitmq#sha256:d5056e576d8767c0faffcb17b5604a4351edacb8f83045e084882cabd384d216 ...
Waiting for services to be up and running...
*** WARNING: Service runner-tpg-ivpc-project-372-concurrent-0-e6aa2c66d0096694-rabbitmq-0 probably didn't start properly.
Health check error:
start service container: Error response from daemon: Cannot link to a non running container: /runner-tpg-ivpc-project-372-concurrent-0-e6aa2c66d0096694-rabbitmq-0 AS /runner-tpg-ivpc-project-372-concurrent-0-e6aa2c66d0096694-rabbitmq-0-wait-for-service/service (docker.go:1156:0s)
Service container logs:
2021-08-05T13:14:33.024761664Z error: RABBITMQ_DEFAULT_PASS is set but deprecated
2021-08-05T13:14:33.024797191Z error: RABBITMQ_DEFAULT_USER is set but deprecated
2021-08-05T13:14:33.024802924Z error: deprecated environment variables detected
2021-08-05T13:14:33.024806771Z
2021-08-05T13:14:33.024810742Z Please use a configuration file instead; visit https://www.rabbitmq.com/configure.html to learn more
2021-08-05T13:14:33.024844321Z
...
because the official Docker documentation (https://hub.docker.com/_/rabbitmq) states that:
WARNING: As of RabbitMQ 3.9, all of the docker-specific variables listed below are deprecated and no longer used. Please use a configuration file instead; visit rabbitmq.com/configure to learn more about the configuration file. For a starting point, the 3.8 images will print out the config file it generated from supplied environment variables.
# Unavailable in 3.9 and up
RABBITMQ_DEFAULT_PASS
RABBITMQ_DEFAULT_PASS_FILE
RABBITMQ_DEFAULT_USER
RABBITMQ_DEFAULT_USER_FILE
RABBITMQ_DEFAULT_VHOST
RABBITMQ_ERLANG_COOKIE
...
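A possible workaround following from that warning (my assumption, not something shown in the question) is to pin the service to a 3.8 image tag, which still honours those environment variables; this only restores the default user/password/vhost, not the full definitions file:
test:integration:
  services:
    - rabbitmq:3.8-management
  variables:
    RABBITMQ_DEFAULT_USER: guest
    RABBITMQ_DEFAULT_PASS: guest
    RABBITMQ_DEFAULT_VHOST: 'my_virtualhost'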

Error connecting: Error while fetching server API version: Ansible

I'm very new to Ansible. I ran the following Ansible playbook and got the error below:
---
- hosts: webservers
  remote_user: linx
  become: yes
  become_method: sudo
  tasks:
    - name: install docker-py
      pip: name=docker-py

    - name: Build Docker image from Dockerfile
      docker_image:
        name: web
        path: docker
        state: build

    - name: Running the container
      docker_container:
        image: web:latest
        path: docker
        state: running

    - name: Check if container is running
      shell: docker ps
Error message:
FAILED! => {"changed": false, "msg": "Error connecting: Error while
fetching server API version: ('Connection aborted.', error(2, 'No such
file or directory'))"}
And here is my folder structure:
.
├── ansible.cfg
├── docker
│   └── Dockerfile
├── hosts
├── main.retry
├── main.yml
I'm confused because the docker folder is already on my local machine, but I don't know why I get this error message.
I found the solution: the Docker daemon was not running after Docker was installed by Ansible. I had to add the following command to my playbook.
---
- hosts: webservers
  remote_user: ec2-user
  become: yes
  become_method: sudo
  tasks:
    - name: install docker
      yum: name=docker

    - name: Ensure service is enabled
      command: service docker restart

    - name: copying file to remote
      copy:
        src: ./docker
        dest: /home/ec2-user/docker

    - name: Build Docker image from Dockerfile
      docker_image:
        name: web
        path: /home/ec2-user/docker
        state: build

    - name: Running the container
      docker_container:
        image: web:latest
        name: web

    - name: Check if container is running
      shell: docker ps
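A more idiomatic variant of the restart step (a sketch on my part, not from the original answer) uses the service module, which also enables the daemon at boot:
- name: Ensure Docker is started and enabled
  service:
    name: docker
    state: started
    enabled: yes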
I have faced the same problem. I was trying to perform a docker login and got the same weird error. In my case, the Ansible user did not have the necessary Docker credentials. The solution, in that case, is to switch to a user with Docker credentials:
- name: docker login
  hosts: my_server
  become: yes
  become_user: docker_user
  tasks:
    - docker_login:
        registry: myregistry.com
        username: myusername
        password: mysecret
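Another option (an assumption on my side, not part of the answer above) is to give the connecting user access to the Docker socket by adding it to the docker group; the user then needs a fresh login/connection before the membership takes effect:
- name: Add the Ansible user to the docker group (hypothetical alternative)
  user:
    name: "{{ ansible_user }}"
    groups: docker
    append: yes
  become: yes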
