I have a CircleCI job with the following structure.
jobs:
  test:
    steps:
      - checkout
      - run #1
          ...<<install dependencies>>
      - run #2
          ...<<execute server-side test>>
      - run #3
          ...<<execute frontend test 1>>
      - run #4
          ...<<execute frontend test 2>>
I want to execute step #1 first, and then steps #2-4 in parallel.
Steps #1, #2, #3, and #4 take roughly 4 min., 1 min., 1 min., and 1 min., respectively.
I tried splitting the steps into different jobs and using workspaces to pass the installed artifacts from #1 to #2-4. However, because of the large size of the artifacts, persisting and attaching the workspace took around 2 min., which cancelled out the advantage of splitting the jobs.
Is there a smart way to run #2-4 in parallel without significant overhead?
If you want to run the commands in parallel, you need to move them into separate jobs; otherwise CircleCI follows the order of your steps and only starts a command once the previous one has finished. Let me give you an example. I created a basic configuration with 4 jobs:
npm install
test1 (runs at the same time as test2, but only after npm install finishes)
test2 (runs at the same time as test1, but only after npm install finishes)
deploy (runs only after both tests are done)
Basically, you need to split the commands between jobs and declare the dependencies you want between them.
See my config file:
version: 2.1
jobs:
  install_deps:
    docker:
      - image: circleci/node:14
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: true
      - run: echo "running npm install"
      - run: npm install
      - persist_to_workspace:
          root: .
          paths:
            - '*'
  test1:
    docker:
      - image: circleci/node:14
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: true
      - attach_workspace:
          at: .
      - run: echo "running the first test and also will run the test2 in parallel"
      - run: npm test
  test2:
    docker:
      - image: circleci/node:14
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: true
      - attach_workspace:
          at: .
      - run: echo "running the second test in parallel with the first test1"
      - run: npm test
  deploy:
    docker:
      - image: circleci/node:14
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: true
      - attach_workspace:
          at: .
      - run: echo "running the deploy job only when the test1 and test2 are finished"
      - run: npm run build

# Orchestrate our job run sequence
workflows:
  test_and_deploy:
    jobs:
      - install_deps
      - test1:
          requires:
            - install_deps
      - test2:
          requires:
            - install_deps
      - deploy:
          requires:
            - test1
            - test2
Following the logic above, install_deps runs with no dependency, but test1 and test2 will not run until install_deps is finished.
Likewise, deploy will not run until both tests are finished.
I've run this config: in the first image we can see the other jobs waiting for the first one to finish, in the second image both tests are running in parallel while the deploy job waits for them to finish, and in the third image the deploy job is running.
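Coming back to the original overhead concern: if persisting and attaching the workspace costs more than it saves, another option is to keep a single job and parallelize at the shell level, running the three test commands as background processes inside one step and waiting for all of them. This is only a sketch; the script names (test:server, test:frontend1, test:frontend2) are placeholders, and it assumes the three suites can share one container.
version: 2.1
jobs:
  test:
    docker:
      - image: circleci/node:14
    steps:
      - checkout
      - run: npm install
      - run:
          name: Run server-side and frontend tests in parallel
          command: |
            # Placeholder commands; substitute your real test invocations
            npm run test:server & pid1=$!
            npm run test:frontend1 & pid2=$!
            npm run test:frontend2 & pid3=$!
            # Wait for each background job and fail the step if any of them failed
            status=0
            wait "$pid1" || status=1
            wait "$pid2" || status=1
            wait "$pid3" || status=1
            exit "$status"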
Related
I am new to using Bitbucket Pipelines. I have an issue with deploying my dist files to an FTP server: the error "mirror: Access failed: /opt/atlassian/pipelines/agent/build/dist/*: No such file or directory" occurs when I try to deploy the project.
This is my bitbucket-pipelines.yml file:
# Template NodeJS build
# This template allows you to validate your NodeJS code.
# The workflow allows running tests and code linting on the default branch.

image: node:16

pipelines:
  branches:
    master:
      - step:
          name: Install dependencies
          caches:
            - node
          script:
            - npm install
          artifacts:
            - node_modules/** # Save modules for next steps
      - step:
          name: Build project
          caches:
            - node
          script:
            - npm run build
          artifacts:
            - dist/** # Save build for next steps
      - step:
          name: Deploy to Production
          trigger: manual
          deployment: Production
          script:
            - pipe: atlassian/ftp-deploy:0.3.7
              variables:
                USER: $FTP_USERNAME
                PASSWORD: $FTP_PASSWORD
                SERVER: $FTP_HOST
                REMOTE_PATH: '/var/www/*******/booking.crt-minds.ru/'
                LOCAL_PATH: 'dist/*'
                EXTRA_ARGS: "--exclude=.bitbucket/ --exclude=.git/ --exclude=bitbucket-pipelines.yml --exclude=.gitignore" # Ignore these
I tried deleting LOCAL_PATH from the YAML to see what happened. But first of all, I don't understand whether my pipeline even has access to the FTP server. How can I check that? Then I need to understand how to replace the dist folder files on the FTP server. Maybe my bitbucket-pipelines.yml file is configured incorrectly?
Judging from the pipe's documentation:
LOCAL_PATH: Optional path to local directory to upload. Default ${BITBUCKET_CLONE_DIR}.
I bet it is interpreting the value you passed not as a glob pattern but literally as a folder named dist/*.
Try dropping that /*:
- step:
    script:
      - pipe: atlassian/ftp-deploy:0.3.7
        variables:
          USER: $FTP_USERNAME
          PASSWORD: $FTP_PASSWORD
          SERVER: $FTP_HOST
          REMOTE_PATH: /var/www/site
          LOCAL_PATH: dist
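As for the side question of whether the pipeline can reach the FTP server at all: a quick way to check is a temporary debug step that just lists the remote directory. This is a sketch, assuming the same node:16 (Debian-based) image and the same $FTP_* variables; lftp is installed on the fly just for the check.
- step:
    name: Check FTP access (temporary debug step)
    script:
      - apt-get update && apt-get install -y lftp
      # Lists the remote root; wrong credentials or host will fail this step loudly
      - lftp -u "$FTP_USERNAME","$FTP_PASSWORD" -e "ls; bye" "$FTP_HOST"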
I am using GitHub Actions to run some automated tests, and my application runs in Docker.
name: Docker Image CI

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build the Docker image
        run: docker-compose build
      - name: up mysql and apache container runs
        run: docker-compose up -d
      - name: install dependencies
        run: docker exec myapp php composer.phar install
      - name: show running container
        run: docker ps
      - name: run unit test
        run: docker exec myapp ./vendor/bin/phpunit
At the 'show running container' step, I can see that all the containers are running, but the MySQL container's status is (health: starting). Thus, my unit tests all fail because they require a connection to MySQL. Is there a way to start the unit tests only when the MySQL container's status is healthy?
I would like to offer a solution. It is not a smart one, but it requires minimal configuration and is ready to go: just use the GitHub Action for Sleeping.
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Sleep for 30 seconds
        uses: jakejarvis/wait-action@master
        with:
          time: '30s'
Assumption: your MySQL server will be up and running within 30 seconds.
You can use thegabriele97/dockercompose-health-action
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Check services healthiness
        uses: thegabriele97/dockercompose-health-action@main
        with:
          timeout: '60'
          workdir: 'src'
As the documentation states:
To handle this, design your application to attempt to re-establish a connection to the database after a failure. If the application retries the connection, it can eventually connect to the database.
If you can't implement this at the moment, you can write a simple script that repeatedly tries a simple statement on the database. Once the script succeeds, you exit the loop and start your unit tests. Check the documentation link I've provided; you'll find an example of such a script there (wait-for-it.sh).
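A minimal sketch of such a loop for this workflow, assuming the MySQL container is named mysql (adjust the container name and any credentials to match your docker-compose setup), placed right before the unit-test step:
- name: wait for MySQL
  run: |
    # Poll the server with mysqladmin ping until it answers, up to ~60 seconds
    for i in $(seq 1 30); do
      if docker exec mysql mysqladmin ping -h 127.0.0.1 --silent; then
        echo "MySQL is up"
        exit 0
      fi
      echo "Waiting for MySQL..."
      sleep 2
    done
    echo "MySQL did not become ready in time"
    exit 1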
My approach was to combine the two pieces below.
In my docker-compose.yml file:
healthcheck:
  test: curl --fail http://localhost/ping || exit 1
  interval: 2s
  retries: 10
  start_period: 10s
  timeout: 10s
In my GitHub Actions workflow:
- name: Wait for healthchecks
  run: timeout 60s sh -c 'until docker ps | grep <CONTAINER_NAME> | grep -q healthy; do echo "Waiting for container to be healthy..."; sleep 2; done'
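Since the service in question here is MySQL, the healthcheck can also poll the database directly instead of an HTTP endpoint; a sketch for the database service (the interval, retry, and timeout values are only an example):
healthcheck:
  # mysqladmin ping succeeds once the server accepts connections
  test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
  interval: 2s
  retries: 15
  start_period: 10s
  timeout: 5s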
As stated in the documentation:
On Linux and macOS runners, use the sleep command:
- name: Sleep for 30 seconds
  run: sleep 30s
  shell: bash
On Windows runners, use the Start-Sleep command:
- name: Sleep for 30 seconds
  run: Start-Sleep -s 30
  shell: powershell
I have a couple of containers running in sequence.
I am using depends_on to make sure the next one only starts after the current one is running.
I realized that one of the containers has a cron job that needs to finish first, so that the next container has the proper data to import.
In this case, I cannot rely on the depends_on parameter alone.
How do I delay the start of the next container? Say, wait for 5 minutes.
Sample docker-compose file:
test1:
  networks:
    - test
  image: test1
  ports:
    - "8115:8115"
  container_name: test1

test2:
  networks:
    - test
  image: test2
  depends_on:
    - test1
  ports:
    - "8160:8160"
You can use an entrypoint script, something like this (netcat needs to be installed in the image):
#!/bin/sh
# Block until test1 accepts connections on port 8115, then give it a moment
until nc -w 1 -z test1 8115; do
  >&2 echo "Service is unavailable - sleeping"
  sleep 1
done
sleep 2
>&2 echo "Service is up - executing command"
And execute it via the command instruction of the service (in the docker-compose file) or in the Dockerfile (CMD directive).
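For example, assuming the script above is saved as /wait-for-test1.sh inside the test2 image and the container's real process is the hypothetical node server.js, the service could look roughly like this:
test2:
  networks:
    - test
  image: test2
  depends_on:
    - test1
  ports:
    - "8160:8160"
  # Run the wait loop first, then start the actual process
  command: sh -c "/wait-for-test1.sh && node server.js"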
I added this in the Dockerfile (since it was just for a quick test):
CMD sleep 60 && node server.js
A 60-second sleep did the trick, since the Node.js part was starting before a database dump init script could finish executing.
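Another option, if your Compose version supports healthcheck conditions in depends_on, is to have test1 report healthy only once the cron job's output exists and let test2 wait on that condition; a sketch with a hypothetical marker file the cron job would create:
test1:
  image: test1
  healthcheck:
    # /tmp/import.done is a hypothetical marker written by the cron job when it finishes
    test: ["CMD", "test", "-f", "/tmp/import.done"]
    interval: 30s
    retries: 20

test2:
  image: test2
  depends_on:
    test1:
      condition: service_healthy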
I'm trying to deploy my web app over FTP using GitLab's continuous integration. The files all get uploaded and the site works fine, but I keep getting the following error when the GitLab runner is almost done.
My .gitlab-ci.yml file:
stages:
  - build
  - test
  - deploy

build:
  stage: build
  tags:
    - shell
  script:
    - echo "Building"

test:
  stage: test
  tags:
    - shell
  script: echo "Running tests"

frontend-deploy:
  stage: deploy
  tags:
    - debian
  allow_failure: true
  environment:
    name: devallei
    url: https://devallei.azurewebsites.net/
  only:
    - master
  script:
    - echo "Deploy to staging server"
    - apt-get update -qq
    - apt-get install -y -qq lftp
    - lftp -c "set ftp:ssl-allow yes; set ssl:verify-certificate false; debug; open -u devallei\FTPAccesHoussem,Devallei2019 ftps://waws-prod-dm1-131.ftp.azurewebsites.windows.net/site/wwwroot; mirror -Rev ./frontend/dist /site/wwwroot"

backend-deploy:
  stage: deploy
  tags:
    - shell
  allow_failure: true
  only:
    - master
  script:
    - echo "Deploy spring boot application"
I expect the runner to go through and pass the job, but it gives me the following error:
---- Connecting data socket to (23.99.220.117) port 10033
---- Data connection established
---> ALLO 4329977
<--- 200 ALLO command successful.
---> STOR vendor.3b66c6ecdd8766cbd8b1.js.map
<--- 125 Data connection already open; Transfer starting.
---- Closing data socket
<--- 226 Transfer complete.
---> QUIT
gnutls_record_recv: The TLS connection was non-properly terminated. Assuming EOF.
<--- 221 Goodbye.
---- Closing control socket
ERROR: Job failed: exit code 1
I don't know the reason for the "gnutls_record_recv: The TLS connection was non-properly terminated. Assuming EOF." error, but it makes your lftp command return a non-zero exit code, which makes GitLab think your job failed. The best thing would be to fix the underlying issue.
If you are confident that everything works fine and just want to prevent the lftp command from failing the job, add "|| true" to the end of the lftp command. But be aware that your job will then not fail even when a real error happens.
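Concretely, that would mean changing only the last line of the frontend-deploy script, keeping the lftp command itself exactly as above (shortened here for readability):
script:
  - lftp -c "set ftp:ssl-allow yes; ...same options as above...; mirror -Rev ./frontend/dist /site/wwwroot" || true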
Why does DigitalOcean kill my Docker process?
cache:
  untracked: true
  key: "$CI_BUILD_REF_NAME"
  paths:
    - .yarn
    - node_modules/
    - client/semantic/

before_script:
  - yarn config set cache-folder .yarn
  - yarn install

stages:
  - build

Compile:
  stage: build
  script:
    - npm run build:prod
  artifacts:
    paths:
      - dist/
  cache:
    untracked: true
    key: "$CI_BUILD_REF_NAME"
    paths:
      - dist/
After 2 minutes 34 seconds:
[4/4] Building fresh packages...
Killed
ERROR: Job failed: exit code 1
Why was it killed?
I have a local environment with the same Linux distribution, Docker, and GitLab runner, and there it works.
Usually the Killed message comes from the Linux OOM (Out Of Memory) killer. I'm betting that if you check the dmesg output you will find an OOM message about the process being killed because not enough memory was available. In that case, you'll need to give your system more memory (or, in the DigitalOcean case, there may not be any swap space, and you could start by creating some).
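To verify, and to add swap as a stopgap on the droplet, something along these lines works (standard Linux commands, run as root; the 2 GB size is just an example):
# Look for OOM killer messages about the build process
dmesg | grep -i -E "out of memory|killed process"

# Create and enable a 2 GB swap file
fallocate -l 2G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile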