I've managed to get a list of which Docker images have been deployed, but it has to be written in Groovy.
I have the following:
sh script: '''
export PATH="$PATH":"${WORKSPACE}"
for docker_image in interface data keycloak artifactory ; do
    DOCKERHOST=`echo ${DOCKERURL}/images-rancher/${docker_image} | sed 's!^localhost://!!g'`
    DOCKERVERSION=`docker image ls ${DOCKERHOST} --format '{{ json .Tag }}' | head -1`
    echo "${DOCKERHOST} - ${DOCKERVERSION}"
done
'''
Changing it into Groovy:
def image = [ "interface", "data" , "keycloak", "artifactory" ]
.
.
.
for docker-image in image
println docker-image
How would you put that in a Groovy script?
Thanks
Here's how you can get most of the way to using Groovy instead of bash. The doRegexManipulation() function is left as an exercise for you to implement.
Note that the docker image ls sh step is still required, and cannot be translated to "pure" Groovy.
withEnv(["PATH=${env.PATH}:${env.WORKSPACE}"]) {
    def images = ["interface", "data", "keycloak", "artifactory"]
    for (String docker_image : images) {
        def DOCKERHOST = doRegexManipulation("${DOCKERURL}/images-rancher/$docker_image")
        def DOCKERVERSION = sh(
            script: """docker image ls '${DOCKERHOST}' --format '{{ json .Tag }}' | head -1""",
            returnStdout: true
        )
        echo "${DOCKERHOST} - ${DOCKERVERSION}"
    }
}
If you wanted to, you could go one step further and replace the head -1 part with Groovy code, since that can be done in Groovy as well.
The withEnv step is documented here. It is used to set environment variables for a block of Groovy code, making those environment variables available to any child processes spawned within that block.
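For illustration, here is one possible sketch of the two remaining pieces: a hypothetical doRegexManipulation() that simply mirrors the sed 's!^localhost://!!g' expression from the original shell script, and the head -1 pipe replaced by taking the first line in Groovy (the second fragment would replace the DOCKERVERSION lines inside the loop above). Treat it as a sketch, not a drop-in implementation:

// Hypothetical helper: a Groovy equivalent of `sed 's!^localhost://!!g'`
def doRegexManipulation(String url) {
    // strip a leading "localhost://" prefix, if present
    return url.replaceFirst(/^localhost:\/\//, '')
}

// Replacing `head -1`: drop the pipe from the sh step and take the first line in Groovy
def output = sh(
    script: "docker image ls '${DOCKERHOST}' --format '{{ json .Tag }}'",
    returnStdout: true
).trim()
def DOCKERVERSION = output ? output.readLines()[0] : ''

The .trim() also removes the trailing newline that returnStdout would otherwise leave in the result.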
Related
I have a Jenkinsfile like the one below:
stage("Deploy artifact for k8s sync")
{
sh '''
ns_exists=`kubectl get ns | grep ${target_cluster}`
if [ -z "$ns_exists" ]
then
echo "No namespace ${consider_namespace} exists in the cluster ${target_cluster}"
echo "Creating namespace ${consider_namespace} in the cluster ${target_cluster}"
kubectl apply "some yaml file"
else
if [ "${consider_kinds}" = "all" ] || [ "${consider_kinds}" = "serviceaccounts" ]
then
echo "Applying source serviceaccounts on target cluster ${target_cluster}"
kubectl "some yaml file"
fi
if [ "${consider_kinds}" = "all" ] || [ "${consider_kinds}" = "secrets" ]
then
echo "Applying source secrets on target cluster ${target_cluster}"
kubectl "some yaml file"
fi
if [ "${consider_kinds}" = "all" ] || [ "${consider_kinds}" = "configmaps" ]
then
echo "Applying source configmaps on target cluster ${target_cluster}"
kubectl apply -f ${BUILD_NUMBER}-${source_cluster}-${consider_namespace}-configmaps.yaml
fi
However, when I run it, it fails with an error like the one below:
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Deploy artefact for k8s sync)
[Pipeline] sh
+ kubectl get ns
+ grep test-central-eks
+ ns_exists=
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE
I'm wondering how to resolve it, and why it fails in the first place.
As far as I can see, there are a couple of things that make this fail.
From your pipeline extract I can see that you are using variables that are not defined in the script, so I guess they are environment variables or come from a previous step. To be able to use them, you will need string interpolation; in your case, a triple-double-quoted string: https://groovy-lang.org/syntax.html#_triple_double_quoted_string
stage("Deploy artifact for k8s sync")
{
sh """
On the other hand, grep exits with code 1 when there is no match. If you need the script to continue when there is no match, you can wrap the command in a group that always succeeds:
ns_exists=`kubectl get ns | { grep ${target_cluster} || true; }`
Finally, a good way to catch these problems is to replay the pipeline with debugging enabled in the shell blocks by adding set -x at the beginning of the sh block.
Please note as well that stderr is not printed; if you need it, you have to redirect it to stdout and read it via the returnStdout: true option of the Jenkins sh step, or redirect it to a custom file.
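Putting those points together, the beginning of the stage could look roughly like this (a sketch only, assuming target_cluster and consider_namespace are visible to the Groovy code, e.g. as parameters or environment variables):

stage("Deploy artifact for k8s sync") {
    // Triple double quotes let Groovy interpolate ${target_cluster} and ${consider_namespace};
    // shell variables such as $ns_exists must be escaped as \$ so Groovy leaves them alone.
    sh """
        set -x
        # grep exits 1 when there is no match, which would fail the whole sh step,
        # so wrap it in a group that always succeeds
        ns_exists=\$(kubectl get ns | { grep "${target_cluster}" || true; })
        if [ -z "\$ns_exists" ]; then
            echo "No namespace ${consider_namespace} exists in the cluster ${target_cluster}"
        fi
    """
}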
I want to run a specific job in a pipeline. I thought assigning a tag to the job and then specifying this tag again in the POST request would fulfill my needs. The problem is that when I trigger the pipeline using the API (POST), all the jobs in the pipeline are triggered, even though only one of them is tagged.
gitlab-ci.yml :
job1:
  script:
    - echo "helloworld!"
  tags: [myTag]

job2:
  script:
    - echo "hello gitlab!"
The API call:
curl -X POST -F token="xxx" -F ref="myTag" https://gitlab.com/api/v4/projects/12345678/trigger/pipeline
Add a variable to your trigger API call, as shown here:
https://docs.gitlab.com/ee/ci/triggers/#making-use-of-trigger-variables
Then use the only property inside your .gitlab-ci.yml file, as shown here:
https://docs.gitlab.com/ee/ci/variables/#environment-variables-expressions
The job will then be executed only if the variable exists. For example:
job1:
  script: echo "HELLO"
  only:
    variables:
      - $variables[API_CALL]=true
Changes in GitLab have probably made the answers above stop working. The

only:
  variables:
    - $variables[....]

syntax now fails CI Lint.
For others that come here like me, here's how I trigger a specific job:
job1:
  script:
    - echo "HELLO for job1"
    - "curl
      --request POST
      --form token=$CI_JOB_TOKEN
      --form ref=master
      --form variables[TRIGGER_JOB]=job2
      https://gitlab.com/api/v4/projects/${CI_PROJECT_ID}/trigger/pipeline"
  except:
    - pipelines

job2:
  script: echo "HELLO for job2"
  only:
    variables:
      - $TRIGGER_JOB == "job2"
⚠️ Note the except: - pipelines in job1; otherwise you end up in an infinite child-pipeline loop!
By using variables you can do it like this.
Use this curl command to trigger the pipeline with a variable:
curl --request POST --form token=${TOKEN} --form ref=master --form "variables[TRIGGERED_JOB]=job1" "https://gitlab.com/api/v4/projects/${CI_PROJECT_ID}/trigger/pipeline"
Of course you have to set the variable accordingly.
Then define your jobs with the appropriate variable:
job1:
  script: echo "HELLO for job1"
  only:
    variables:
      - $TRIGGERED_JOB == "job1"

job2:
  script: echo "HELLO for job2"
  only:
    variables:
      - $TRIGGERED_JOB == "job2"
If you are running the curl from inside another (or the same) job, you can use ${CI_JOB_TOKEN} instead of $TOKEN. See
https://docs.gitlab.com/ee/ci/triggers/#making-use-of-trigger-variables
I'm currently digging into GitLab CI. I would like to add a way in my YAML files to tag my Docker images with a version number composed in the following fashion: MajorVersion.MinorVersion.AutoincrementedGlobalVersionNumber
I would like to auto-increment the globally defined variable "AutoincrementedGlobalVersionNumber" each time I deploy.
I have used CI_PIPELINE_IID, however it keeps incrementing for each pipeline run; I need a version number I can keep track of, and it should increment only when I pack and deploy.
variables:
  CI_VERSION: "1.0.${CI_PIPELINE_IID}"

build-master:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE" -t "$CI_REGISTRY_IMAGE:$CI_VERSION" ./postfix
    - docker push "$CI_REGISTRY_IMAGE"
  only:
    - master
You probably can't do this with the default GitLab CI variables, but there could be a workaround along the lines of (untested):
Get the registry ID with something like:
$ registry_id=$(curl -s -XGET --header "PRIVATE-TOKEN: $TOKEN" "https://gitlab.com/$PROJECT_PATH/container_registry.json" | jq '.[].id')
Query said registry to get the name:
curl -s -XGET --header "PRIVATE-TOKEN: $TOKEN" "https://gitlab.com/$PROJECT_PATH/registry/repository/$registry_id/tags?format=json" | jq
This returns, for example, the following, and you can grep the name field for the GlobalVersionNumber:
[
{
"name": "latest",
"location": "registry.gitlab.com/mwasilewski/helm:latest",
"revision": "85a403337a56e9e6409dfb8185bf9aa5c2135f9a437bd75da82d27471c71feb4",
"short_revision": "85a403337",
"total_size": 152246865,
"created_at": "2016-12-11T08:31:30.126+00:00",
"destroy_path": "/mwasilewski/helm/registry/repository/31074/tags/latest"
}
]
Continue with your Docker build and push, after incrementing the GlobalVersionNumber you get back.
NB: this assumes you are using GitLab's Container Registry
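Putting it together, the job's script section could look roughly like this (an untested sketch: it assumes tags of the form 1.0.N, a single container repository in the project, and that $TOKEN, $PROJECT_PATH and $CI_REGISTRY_IMAGE are set and jq is available):

# Find the registry repository ID (assumes exactly one repository in the project)
registry_id=$(curl -s -XGET --header "PRIVATE-TOKEN: $TOKEN" "https://gitlab.com/$PROJECT_PATH/container_registry.json" | jq '.[].id')

# Take the highest existing 1.0.N tag and increment N (defaults to 0 if none exists yet)
last=$(curl -s -XGET --header "PRIVATE-TOKEN: $TOKEN" "https://gitlab.com/$PROJECT_PATH/registry/repository/$registry_id/tags?format=json" \
  | jq -r '.[].name' | grep -E '^1\.0\.[0-9]+$' | sort -t. -k3 -n | tail -1 | cut -d. -f3)
CI_VERSION="1.0.$(( ${last:-0} + 1 ))"

# Build and push with the computed version
docker build --pull -t "$CI_REGISTRY_IMAGE" -t "$CI_REGISTRY_IMAGE:$CI_VERSION" ./postfix
docker push "$CI_REGISTRY_IMAGE"
docker push "$CI_REGISTRY_IMAGE:$CI_VERSION"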
Resources:
https://gitlab.com/gitlab-org/gitlab-ce/issues/40826
I created a Docker image that has a few labels. Here is the LABEL section of my Dockerfile:
grep LABEL Dockerfile
LABEL "css1"="/var/www/css1"
LABEL "css2"="/var/www/css2"
LABEL "img"="/var/www/img"
LABEL "js"="/var/www/js"
Then:
docker image inspect --format='{{.Config.Labels}}' labels-test
map[css1:/var/www/css1 css2:/var/www/css2 img:/var/www/img js:/var/www/js]
I need to get, for example, all labels starting with css. This is as far as I was able to figure out:
docker image inspect --format='{{range $k,$v:=.Config.Labels}}{{$k}}:{{$v}} {{end}}' labels-test
css1:/var/www/css1 css2:/var/www/css2 img:/var/www/img js:/var/www/js
Desired output would be:
css1:/var/www/css1 css2:/var/www/css2
The available Go template functions are documented in the Go text/template package documentation.
eq tests whether arg1 == arg2.
printf "%.3s" $k will give you the first 3 characters of a string.
docker image inspect \
--format='{{ range $k,$v:=.Config.Labels }}{{ if eq (printf "%.3s" $k) "css" }}{{ $k }}:{{ $v }} {{end}}{{end}}' \
IMAGE
You might want to look at querying the Docker API images endpoint /images/IMAGE/json directly, or at processing the JSON output elsewhere, if you need to do any more advanced processing:
docker image inspect \
--format='{{json .Config.Labels}}' \
IMAGE
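If jq is available, that more advanced processing could, for example, filter the labels whose key starts with css (just an illustration, not a docker built-in):

docker image inspect --format='{{json .Config.Labels}}' labels-test \
  | jq -r 'to_entries[] | select(.key | startswith("css")) | "\(.key):\(.value)"'

This prints css1:/var/www/css1 and css2:/var/www/css2, one per line.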
You can do something like
docker inspect --format='{{index (index (.Config.Labels)).css1 }}' labels-test
which shows for me
/var/www/css1
and also
docker inspect --format='{{index (index (.Config.Labels)).css2 }}' labels-test
which shows for me
/var/www/css2
See my previous answer on that subject
How to get ENV variable when doing Docker Inspect
Edit
The following gives exactly what you ask for
docker inspect --format='{{index (index (.Config.Labels)).css1 }} {{index (index (.Config.Labels)).css2 }}' labels-test
as I get
/var/www/css1 /var/www/css2
How to escape double curly braces in Ansible 1.9.2?
For instance, how can I escape double curly braces in the following shell command?
- name: Test
  shell: "docker inspect --format '{{ .NetworkSettings.IPAddress }}' instance1"
Whenever you have problems with conflicting characters in Ansible, a rule of thumb is to output them as a string in a Jinja expression.
So instead of {{ you would use {{ '{{' }}:
- debug: msg="docker inspect --format '{{ '{{' }} .NetworkSettings.IPAddress {{ '}}' }}' instance1"
Topic "Escaping" in the Jinja2 docs.
This:
- name: Test
  shell: "docker inspect --format {% raw %}'{{ .NetworkSettings.IPAddress }}' {% endraw %} instance1"
should work.
Another way to do it is using backslashes, like \{\{ .NetworkSettings.IPAddress \}\}.
Hope it helps.
Tried with Ansible 2.1.1.0.
A {%raw%}...{%endraw%} block seems the cleanest way:
- name: list container images and name date on the server
  shell: docker ps --format {%raw%}"{{.Image}} {{.Names}}"{%endraw%}
You only need to escape the leading '{{':
tasks:
  - name: list container images and names
    shell: docker ps --format "{{'{{'}}.Image}} {{'{{'}}.Names}}"
There is no harm in escaping the trailing '}}' as well, except that it is more difficult to read:
tasks:
  - name: list container images and names
    shell: docker ps --format "{{'{{'}}.Image{{'}}'}} {{'{{'}}.Names{{'}}'}}"
Backslash '\' does not seem to work.
New in Ansible 2.0 is the ability to declare a value as unsafe with the !unsafe tag.
In your example you could do:
- name: Test
  shell: !unsafe "docker inspect --format '{{ .NetworkSettings.IPAddress }}' instance1"
See the docs for details.
I have a similar issue: I need to post a JSON doc made from a Jinja2 template containing some Go template variables (yes, I know :-P), such as
"NAME_TEMPLATE": %{{service_name}}.%{{stack_name}}.%{{environment_name}}
Trying to fence this part of the template between
{% raw %} ... {% endraw %}
didn't work, because there is some sort of magic in Ansible which runs the template and variable substitution twice (I'm not sure about that, but it definitely looks like it).
You end up with "undefined variable service_name" when trying to use the template...
So I ended up using a combination of !unsafe and {% raw %} ... {% endraw %} to define a fact that's later used in the template.
- set_fact:
    __rancher_init_root_domain: "{{ rancher_root_domain }}"
    # !unsafe: try to trick Ansible into not doing substitutions in that string, then use %raw% so the value won't be substituted another time
    __rancher_init_name_template: !unsafe "{%raw%}%{{service_name}}.%{{stack_name}}.%{{environment_name}}{%endraw%}"
- name: build a template for a project
  set_fact:
    __rancher_init_template_doc: "{{ lookup('template', 'templates/project_template.json.j2') }}"
the template contains this:
"ROOT_DOMAIN":"{{__rancher_init_root_domain}}",
"ROUTE53_ZONE_ID":"",
"NAME_TEMPLATE":"{{__rancher_init_name_template }}",
"HEALTH_CHECK":"10000",
and the output is ok:
"NAME_TEMPLATE": "%{{service_name}}.%{{stack_name}}.%{{environment_name}}",
Here's a shorter alternative to udondan's answer: wrap the whole Go template string in a single Jinja expression:
shell: "docker inspect --format {{ '{{ .NetworkSettings.IPAddress }}' }} instance1"
The solution using raw has already been mentioned, but the command in the earlier answer unfortunately didn't work for me.
Without Ansible:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' docker_instance_name
With Ansible:
- name: Get ip of db container
  shell: "{% raw %}docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' docker_instance_name{% endraw %}"
  register: db_ip_addr

- debug:
    var: db_ip_addr.stdout
I managed to work around my issue using a small script:
#!/usr/bin/env bash
docker inspect --format '{{ .NetworkSettings.IPAddress }}' "$1"
And the following Ansible play
- copy:
    src: files/get_docker_ip.sh
    dest: /usr/local/bin/get_docker_ip.sh
    owner: root
    group: root
    mode: 0770

- shell: "/usr/local/bin/get_docker_ip.sh {{ SWIFT_ACCOUNT_HOSTNAME }}"
  register: swift_account_info
Nevertheless, it's very surprising that Ansible doesn't allow escaping double curly braces!
I was unable to get @Ben's answer to work (shell: !unsafe ...).
What follows here is a complete (and working!) answer to the OP's question, updated for Ansible > 2.0.
---
# file: play.yml
- hosts: localhost
  connection: local
  gather_facts: no

  vars:
    # regarding !unsafe, please see:
    # https://docs.ansible.com/ansible/latest/user_guide/playbooks_advanced_syntax.html
    #
    - NetworkSettings_IPAddress: !unsafe "{{.NetworkSettings.IPAddress}}"

  tasks:
    - shell: "docker inspect --format '{{NetworkSettings_IPAddress}}' instance1"
      register: out

    - debug: var="{{item}}"
      with_items:
        - out.cmd
        - out.stdout
Output ([WARNINGS] removed):
# ansible-playbook play.yml
PLAY [localhost] ***************************************************************
TASK [shell] *******************************************************************
changed: [localhost]
TASK [debug] *******************************************************************
ok: [localhost] => (item=out.cmd) => {
"item": "out.cmd",
"out.cmd": "docker inspect --format '{{.NetworkSettings.IPAddress}}' instance1"
}
ok: [localhost] => (item=out.stdout) => {
"item": "out.stdout",
"out.stdout": "172.17.0.2"
}
PLAY RECAP *********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0
# ansible --version | head -1
ansible 2.6.1
Here is a mostly clean and Ansible-native workaround that does not depend on docker inspect with curly braces. We assume that we have just managed a single container with the Ansible docker module beforehand:
- name: query IP of client container
  shell: "docker exec {{ docker_containers[0].Id }} hostname -I"
  register: _container_query

- name: get IP of query result
  set_fact:
    _container_ip: "{{ _container_query.stdout | regex_replace('\\s','') }}"
You now have the IP of the Docker container in the variable _container_ip. I also published this workaround in my article The Marriage of Ansible with Docker.
[Update 2015-11-03] Removed the whitespace from the stdout of the container query.
[Update 2015-11-04] BTW, there are two pull requests in the official Ansible repository that would make this workaround unnecessary by restoring the facts returned by the Docker module. You could then access the IP of a Docker container via docker_containers[0].NetworkSettings.IPAddress. So please vote for those pull requests:
fixed broken facts #1457
docker module: fix regressions introduced by f38186c and 80aca4b #2093