I have three stages in gitlab-ci.yaml. I want to get the total pipeline execution duration in stage3.
Here is an example.
Stage1: execution duration is 10s.
Stage2: execution duration is 5s.
Stage3: gets the total pipeline duration of 15s or more, because the Stage1 + Stage2 total execution duration is 15s.
stages:
  - stage1
  - stage2
  - stage3

stage1:
  stage: stage1
  script:
    - echo "Do something on stage1"
    - sleep 10

stage2:
  stage: stage2
  script:
    - echo "Do something on stage2"
    - sleep 5

stage3:
  stage: stage3
  script:
    - echo "Finally get stage1 + stage2 execution duration... 15s or more..."
If I run the above script, I can see the execution duration on the GitLab pipeline GUI.
In a Jenkins pipeline I can get the execution duration with the currentBuild.duration parameter, but I can't find a similar parameter in GitLab CI.
How can I get the pipeline execution duration in GitLab CI?
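A minimal sketch of one approach, assuming your GitLab version provides the predefined CI_PIPELINE_CREATED_AT variable (an ISO 8601 timestamp) and the job image has GNU date; stage3 can then compute the elapsed time itself:

stage3:
  stage: stage3
  script:
    # CI_PIPELINE_CREATED_AT is set by GitLab when the pipeline is created
    - start=$(date -d "$CI_PIPELINE_CREATED_AT" +%s)
    - now=$(date +%s)
    - echo "Pipeline has been running for $((now - start)) seconds"

If you need the final duration after the pipeline finishes, the pipelines API (GET /projects/:id/pipelines/:pipeline_id) returns a duration field, which is the closest analogue to currentBuild.duration I know of.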
I have a CircleCI job with the following structure.
jobs:
  test:
    steps:
      - checkout
      - run #1
        ...<<install dependencies>>
      - run #2
        ...<<execute server-side test>>
      - run #3
        ...<<execute frontend test 1>>
      - run #4
        ...<<execute frontend test 2>>
I want to execute step #1 first, and then steps #2-4 in parallel.
#1, #2, #3, and #4 take roughly 4 min., 1 min., 1 min., and 1 min., respectively.
I tried splitting the steps into different jobs and using workspaces to pass the installed artifacts from #1 to #2-4. However, because of the large size of the artifacts, persisting and attaching the workspace took around 2 min., so the advantage of splitting the jobs was cancelled out.
Is there a smart way to run #2-4 in parallel without significant overhead?
If you want to run the commands in parallel, you need to move them into separate jobs; otherwise, CircleCI follows the structure of your steps and runs each command only when the previous one has finished. Let me give you an example. I created a basic configuration with 4 jobs:
npm install
test1 (runs at the same time as test2, but only once npm install has finished)
test2 (runs at the same time as test1, but only once npm install has finished)
deploy (runs only after the 2 tests are done)
Basically, you need to split the commands between jobs and set the dependencies you want.
See my config file:
version: 2.1
jobs:
  install_deps:
    docker:
      - image: circleci/node:14
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: true
      - run: echo "running npm install"
      - run: npm install
      - persist_to_workspace:
          root: .
          paths:
            - '*'
  test1:
    docker:
      - image: circleci/node:14
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: true
      - attach_workspace:
          at: .
      - run: echo "running the first test and also will run the test2 in parallel"
      - run: npm test
  test2:
    docker:
      - image: circleci/node:14
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: true
      - attach_workspace:
          at: .
      - run: echo "running the second test in parallel with the first test1"
      - run: npm test
  deploy:
    docker:
      - image: circleci/node:14
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: true
      - attach_workspace:
          at: .
      - run: echo "running the deploy job only when the test1 and test2 are finished"
      - run: npm run build

# Orchestrate our job run sequence
workflows:
  test_and_deploy:
    jobs:
      - install_deps
      - test1:
          requires:
            - install_deps
      - test2:
          requires:
            - install_deps
      - deploy:
          requires:
            - test1
            - test2
Now see the logic above: install_deps runs with no dependency, but test1 and test2 will not run until install_deps is finished.
Also, deploy will not run until both tests are finished.
I've run this config. In the first screenshot, the other jobs are waiting for the first one to finish; in the second, both tests are running in parallel while the deploy job waits for them to finish; in the third, the deploy job is running.
I have a Jenkinsfile where I have a PostgreSQL container running, and I need to write logic that waits for the docker container to time out/exit.
Currently, I have something like this:
psql_container = sh(script: "docker inspect psqlcont --format='{{.State.ExitCode}}'", returnStdout: true).trim()
sh "sleep 1000"
if (psql_container != '0') {
    error "psql failed ..."
} else {
    echo "psql starts"
}
Instead of the sleep, I need to write a condition where the container will quit/time out in 1000 seconds.
The docker wait command waits (indefinitely) for a container to complete. The Jenkins pipeline timeout step runs a block but aborts it after a deadline. You can combine these to wait for a container to finish, or kill it if it takes too long:
try {
    timeout(time: 1000, unit: 'SECONDS') {
        sh "docker wait psqlcont"
    }
} catch (e) {
    sh "docker stop psqlcont"
    sh "docker wait psqlcont" // <= 10s, container is guaranteed to be stopped
}
sh "docker rm psqlcont"
You can use the timeout command instead of sleep:
timeout [OPTIONS] DURATION COMMAND [ARG]…
The DURATION can be a positive integer or a floating-point number, followed by an optional unit suffix:
s - seconds (default)
m - minutes
h - hours
d - days
You can also choose which signal to send when the timeout is reached.
For example, to send SIGKILL to the ping command after one minute, you would use:
sudo timeout -s SIGKILL 1m ping 8.8.8.8
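Applied to the container scenario above, a sketch that bounds docker wait instead of sleeping; timeout exits with status 124 when the deadline is hit, and psqlcont is the container name from the question:

# Wait up to 1000 seconds for the container to exit on its own
if timeout 1000 docker wait psqlcont > /dev/null; then
    echo "container exited within the deadline"
else
    echo "container still running after 1000s - stopping it"
    docker stop psqlcont
fi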
I have a couple of containers running in sequence.
I am using depends_on to make sure the next one only starts after the current one is running.
I realized one of the containers has a cron job that needs to finish first, so that the next container has the proper data to import.
In this case, I cannot just rely on the depends_on parameter.
How do I delay the start of the next container? Say, wait for 5 minutes.
Sample docker-compose:
test1:
  networks:
    - test
  image: test1
  ports:
    - "8115:8115"
  container_name: test1
test2:
  networks:
    - test
  image: test2
  depends_on:
    - test1
  ports:
    - "8160:8160"
You can use an entrypoint script, something like this (netcat needs to be installed in the image):
#!/bin/sh
# Block until test1 accepts TCP connections on port 8115
until nc -w 1 -z test1 8115; do
  >&2 echo "Service is unavailable - sleeping"
  sleep 1
done
sleep 2
>&2 echo "Service is up - executing command"
Execute it via the command instruction of the service (in the docker-compose file) or via the CMD directive in the Dockerfile.
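A minimal sketch of the compose wiring, assuming the script is baked into the image at /wait-for-test1.sh and the container's real process is my-app (both names are placeholders):

test2:
  networks:
    - test
  image: test2
  depends_on:
    - test1
  ports:
    - "8160:8160"
  # run the wait loop first, then hand over to the real process
  command: ["sh", "-c", "/wait-for-test1.sh && exec my-app"]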
I added this in the Dockerfile (since it was just for a quick test):
CMD sleep 60 && node server.js
A 60-second sleep did the trick, since the Node.js part was executing before a database dump init script could finish running.
I'm trying to deploy my web app using FTP and the continuous integration of GitLab. The files all get uploaded and the site works fine, but I keep getting the following error when the GitLab runner is almost done.
My gitlab-ci.yml file:
stages:
  - build
  - test
  - deploy

build:
  stage: build
  tags:
    - shell
  script:
    - echo "Building"

test:
  stage: test
  tags:
    - shell
  script: echo "Running tests"

frontend-deploy:
  stage: deploy
  tags:
    - debian
  allow_failure: true
  environment:
    name: devallei
    url: https://devallei.azurewebsites.net/
  only:
    - master
  script:
    - echo "Deploy to staging server"
    - apt-get update -qq
    - apt-get install -y -qq lftp
    - lftp -c "set ftp:ssl-allow yes; set ssl:verify-certificate false; debug; open -u devallei\FTPAccesHoussem,Devallei2019 ftps://waws-prod-dm1-131.ftp.azurewebsites.windows.net/site/wwwroot; mirror -Rev ./frontend/dist /site/wwwroot"

backend-deploy:
  stage: deploy
  tags:
    - shell
  allow_failure: true
  only:
    - master
  script:
    - echo "Deploy spring boot application"
I expect the runner to go through and pass the job, but it gives me the following error:
---- Connecting data socket to (23.99.220.117) port 10033
---- Data connection established
---> ALLO 4329977
<--- 200 ALLO command successful.
---> STOR vendor.3b66c6ecdd8766cbd8b1.js.map
<--- 125 Data connection already open; Transfer starting.
---- Closing data socket
<--- 226 Transfer complete.
---> QUIT
gnutls_record_recv: The TLS connection was non-properly terminated. Assuming EOF.
<--- 221 Goodbye.
---- Closing control socket
ERROR: Job failed: exit code 1
I don't know the reason for the "gnutls_record_recv: The TLS connection was non-properly terminated. Assuming EOF." error, but it makes your lftp command return a non-zero exit code, which makes GitLab think your job failed. The best thing would be to fix the underlying TLS issue.
If you're confident everything works fine and you just want to prevent the lftp command from failing the job, add an || true to the end of the lftp command. But be aware that your job then won't fail even if a real error happens.
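A minimal sketch of that workaround, applied to the deploy script from the question (unchanged except for the trailing || true):

script:
  - echo "Deploy to staging server"
  - apt-get update -qq
  - apt-get install -y -qq lftp
  # || true swallows the non-zero exit code from the TLS shutdown error
  - lftp -c "set ftp:ssl-allow yes; set ssl:verify-certificate false; debug; open -u devallei\FTPAccesHoussem,Devallei2019 ftps://waws-prod-dm1-131.ftp.azurewebsites.windows.net/site/wwwroot; mirror -Rev ./frontend/dist /site/wwwroot" || true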
I have a test stage and a production stage. I would like to manually confirm the deployment to production. Is there a way to achieve this?
You can make use of Conditional Deployments. This allows you to specify whether you push to production or to test.
Combine it with e.g. a check-live-deployment.sh script and differentiate between branches and/or tagged commits.
For example:
#!/bin/bash
set -e

contains() {
  if [[ $TRAVIS_TAG = *"-live"* ]]
  then
    # "-live" is in $TRAVIS_TAG
    echo "true"
  else
    # "-live" is not in $TRAVIS_TAG
    echo "false"
  fi
}

echo "============== CHECKING IF DEPLOYMENT CONDITION IS MET =============="
export LIVE=$(contains)
and the .travis.yml for a dev/staging/live deployment to Cloud Foundry:
sudo: false
language: node_js
node_js:
  - '8.9.4'
branches:
  only:
    - master
    - "/v*/"
script:
  - printenv
before_install:
  - chmod +x -R ci
install:
  - source ci/check_live_deployment.sh
  - ./ci/check_live_deployment.sh
deploy:
  - provider: script
    skip_cleanup: true
    script: env CF_SPACE=$CF_SPACE_DEV CF_MANIFEST=manifest-dev.yml ci/deploy_to_cf.sh
    on:
      tags: false
  - provider: script
    skip_cleanup: true
    script: env CF_SPACE=$CF_SPACE_STAGING CF_MANIFEST=manifest-staging.yml ci/deploy_to_cf.sh
    on:
      tags: true
  - provider: script
    skip_cleanup: true
    script: env CF_SPACE=$CF_SPACE_LIVE CF_MANIFEST=manifest-live.yml ci/deploy_to_cf.sh
    on:
      tags: true
      condition: $LIVE = true
This example pushes to dev if the branch is master and no tag is present, to staging if it is a tagged commit, and to staging+live if it is a tagged commit on master (a release) and the deployment condition is met.
Granted: maybe not the prettiest solution, but it definitely works. And this is not Travis waiting for you to manually confirm the live deployment (which would somewhat defeat the whole automated-deployment principle, in my opinion), but it is a way to guarantee that you have to manually trigger the pipeline in a specific way.