If I have a .circleci/config.yml file like so:
version: 2
jobs:
  build-node8:
    docker:
      - image: oresoftware/lmx-circleci:8
    steps:
      - checkout
      - run: ./scripts/circleci/run.sh
  build-node9:
    docker:
      - image: oresoftware/lmx-circleci:9
    steps:
      - checkout
      - run: ./scripts/circleci/run.sh
  build-node10:
    docker:
      - image: oresoftware/lmx-circleci:10
    steps:
      - checkout
      - run: ./scripts/circleci/run.sh
  build-node11:
    docker:
      - image: oresoftware/lmx-circleci:11
    steps:
      - checkout
      - run: ./scripts/circleci/run.sh
  build-node12:
    docker:
      - image: oresoftware/lmx-circleci:12
    steps:
      - checkout
      - run: ./scripts/circleci/run.sh
There are 5 jobs listed here, but when the builds start, only 4 jobs run in parallel. Is there a way to run more than 4 jobs in parallel, or is there a hard limit?
My guess is that under workflows, I can change the parallelism level?
workflows:
  version: 2
  build_nodejs:
    parallelism: 5
    jobs:
      - build-node8
      - build-node9
      - build-node10
      - build-node11
      - build-node12
Perhaps this requires a paid account, though?
Short Answer:
CircleCI lets you run as many jobs in parallel as you want, as long as your payment plan has enough containers to serve each job.
I suspect that your plan only has 4 containers. You can check how many containers you have in the settings tab in CircleCI.
In my case, I have a total of 2 containers available: 1 paid + 1 free, so right now I can run at most 2 jobs in parallel. I can pay an extra $50 per month per container to add more containers, though.
Additional Details:
This article gives a great overview of how to configure CircleCI jobs to run in parallel (and it actually has an example where 5 jobs run in parallel): https://circleci.com/blog/decrease-your-build-times-by-running-jobs-in-parallel-with-workflows/
Regarding the config file snippet you pasted in your question - it looks fine (though you don't need the parallelism: 5 flag, since CircleCI will use all available plan capacity automatically).
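For reference, here's roughly what the workflow section could look like without the parallelism key (job names copied from your snippet); CircleCI fans the jobs out across whatever containers your plan allows:
workflows:
  version: 2
  build_nodejs:
    jobs:
      - build-node8
      - build-node9
      - build-node10
      - build-node11
      - build-node12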
Can you please check to see how many containers are in your plan and then report back?
FYI - CircleCI container and concurrent job plan info:
https://circleci.com/pricing/
Related
I am rewriting my CircleCI config. Previously everything was in a single job and working well, but for good reasons I want more structure.
Now I have two jobs, build and test, and I want the second job to reuse the machine exactly where the build job stopped.
I will later have a third and fourth job.
Ideally there would be a built-in CircleCI line that says I want to reuse the previous machine/executor.
Other options are Workspaces, which save data on a CircleCI machine, or building and deploying my own Docker image that represents the machine after the build job.
What is the easiest way to achieve what I want to do?
Currently, I have basically in my yaml:
jobs:
  build:
    docker:
      - image: cypress/base:14.16.0
    steps:
      - checkout
      - node/install:
          install-yarn: true
          node-version: '16.13'
      - other-long-commands
  test:
    # NOT GOOD: need an executor
    steps:
      - run:
          name: 'test'
          command: 'npx cypress run'
          environment:
            TEST_SUITE: SMOKE
workflows:
  build-and-test:
    jobs:
      - build
      - test:
          requires:
            - build
It can't be done. Workspaces are the solution instead.
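A minimal sketch of the workspace approach, assuming the whole checkout (including node_modules and build output) is small enough to persist; the install/build commands and paths are assumptions, so adjust them to your project:
jobs:
  build:
    docker:
      - image: cypress/base:14.16.0
    steps:
      - checkout
      - run: yarn install          # assumed install/build commands
      - run: yarn build
      - persist_to_workspace:      # save the working directory for downstream jobs
          root: .
          paths:
            - .
  test:
    docker:
      - image: cypress/base:14.16.0
    steps:
      - attach_workspace:          # restore everything the build job persisted
          at: .
      - run:
          name: 'test'
          command: 'npx cypress run'
          environment:
            TEST_SUITE: SMOKE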
My follow-up would be: why do you need two jobs? Depending on your use case, pulling steps out into reusable commands might help, or even an orb.
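If you go the reusable-commands route, a rough sketch (this requires a 2.1 config; the command name and its steps are made up for illustration):
version: 2.1
commands:
  install-deps:                    # hypothetical shared command
    steps:
      - checkout
      - run: yarn install
jobs:
  build:
    docker:
      - image: cypress/base:14.16.0
    steps:
      - install-deps
      - run: yarn build
  test:
    docker:
      - image: cypress/base:14.16.0
    steps:
      - install-deps
      - run: npx cypress run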
I'm looking for a way to run a "cleanup" job/pipeline/etc. when a GitLab merge request is closed (either merged or not).
The issue is this - we create a feature deployment on our cluster any time a merge request is opened. Currently, I have no mechanism for detecting when an MR is closed. Over time these old 'feature deployments' accumulate on the cluster.
I could write a manual cleanup script that runs against the cluster (look at all open features, remove the ones that no longer exist), but that is going to be a bit hairy and error-prone. I was hoping GitLab has a way to use its really easy/nice pipeline+jobs features for this type of cleanup.
We use GitLab environments for review apps. Environments can be automatically stopped after x weeks with environment:auto_stop_in. This has the advantage that even when MRs stay open for months because they are forgotten, the data is cleaned up after x weeks (we use 2 weeks).
In the stop script, you can do whatever you need to clean things up - in our case, a helm uninstall.
.gitlab-ci.yml
deploy_review:
  stage: deploy
  script: "./deploy_review.sh"
  environment:
    auto_stop_in: 2 weeks
    name: review/$CI_COMMIT_REF_SLUG
    on_stop: stop_review_app
    deployment_tier: development
  resource_group: deploy-review-$CI_COMMIT_REF_SLUG

stop_review_app:
  stage: after_deploy
  when: manual
  only:
    - branches
  variables:
    GIT_STRATEGY: none
  script: "./stop_review_app.sh"
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop
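If you'd rather not maintain a separate script file, the cleanup can also live inline in the stop job; a rough sketch assuming the Helm release is named after the branch slug (the release naming scheme is an assumption):
stop_review_app:
  stage: after_deploy
  when: manual
  variables:
    GIT_STRATEGY: none
  script:
    - helm uninstall "review-$CI_COMMIT_REF_SLUG"   # assumed release naming scheme
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop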
I use CircleCI and the pipeline is as follows:
1. build
2. test
3. build app & nginx Docker images and push them to a GitLab registry
4. deploy Docker stack to the development server (currently the Swarm manager)
I just pushed my develop branch to my repository and faced a "Symfony4 new Controller page" on the development server after a successful message from CircleCI.
I logged into it via SSH and executed the following (output shown for the application service):
docker stack ps my-development-stack --format "{{.Name}} {{.Image}} {{.CurrentState}}"
my-stack_app.1 gitlab-image:latest-develop Running 33 minutes ago
On my GitLab repository's registry, the application image was "Last Updated" 41 minutes ago. The service's image has apparently been refreshed with the latest version beforehand.
Is this a common issue/error?
How could (or should) I fix this timing issue?
Can CircleCI help with this?
Perhaps it is best (though not ideal) to introduce a delay between build and deploy; you can refer to this example: CircleCI Delay Between Jobs.
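A rough sketch of that idea, with a dedicated wait job between build and deploy (the job name, image, and the 60-second value are assumptions):
jobs:
  wait-for-registry:
    docker:
      - image: cimg/base:stable
    steps:
      - run: sleep 60   # give the registry time to finish serving the freshly pushed image

workflows:
  version: 2
  build-and-deploy:
    jobs:
      - build
      - wait-for-registry:
          requires:
            - build
      - deploy-dev:
          requires:
            - wait-for-registry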
I found a workaround using a CircleCI scheduled workflow triggered by a cron expression. I scheduled a nightly build workflow which runs every day at midnight.
Sample of my config.yml file
# Beginning of the config.yml
# ...
workflows:
  version: 2

  # Push workflow
  # ...

  # Nightly build workflow
  nightly-dev-deploy:
    triggers:
      - schedule:
          cron: "0 0 * * *"
          filters:
            branches:
              only:
                - develop
    jobs:
      - build
      - test:
          requires:
            - build
      - deploy-dev:
          requires:
            - test
Read more about scheduled workflows, including a nightly build example, in the official CircleCI documentation.
This looks more like a workaround to me. I'd be glad to hear how you avoid this issue, which could lead to a better answer to the question.
Is it possible to have one job run in the context of another job? I have some jobs that share some steps, and I don't want to repeat these steps in the different jobs.
push-production-image:
  docker:
    - image: google/cloud-sdk:latest
  working_directory: ~/app
  steps:
    - setup-gcp-docker
    - run: docker push [image]
No, you cannot. However, YAML itself has a way to solve this problem with what are called YAML anchors and aliases.
Here's a blog post I wrote on how to do specifically this: https://circleci.com/blog/circleci-hacks-reuse-yaml-in-your-circleci-config-with-yaml/
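As a rough sketch of that pattern (the second job name and its image tag placeholder are made up for illustration), the shared executor config is defined once with an anchor and merged into each job with an alias:
version: 2

defaults: &defaults              # anchor holding the config shared between jobs
  docker:
    - image: google/cloud-sdk:latest
  working_directory: ~/app

jobs:
  push-staging-image:
    <<: *defaults                # alias: reuse the anchored executor config
    steps:
      - setup-gcp-docker
      - run: docker push [staging-image]
  push-production-image:
    <<: *defaults
    steps:
      - setup-gcp-docker
      - run: docker push [image]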
I'm iterating on adding database migrations to a project. As a first step, I've made a repository that runs migrations. Now I need these migrations to run against the stage/prod environments. I do not want this to happen on every commit. Does CircleCI provide a way to have a button that I can click to run a job?
I think ideally I'd have 2 buttons: one for running migrations on stage, one for running them on prod. Is this possible?
There is a manual approval process for workflows.
https://circleci.com/docs/2.0/workflows/#holding-a-workflow-for-a-manual-approval
workflows:
  version: 2
  build-test-and-approval-deploy:
    jobs:
      - build
      - test1:
          requires:
            - build
      - test2:
          requires:
            - test1
      - hold:
          type: approval
          requires:
            - test2
      - deploy:
          requires:
            - hold
It's pretty limited. You can't use it to start a build.