Every time the SonarCloud scanner starts, it downloads the image. This slows down the whole pipeline and generates additional cost.
The sparse documentation has no information about an additional cache: https://bitbucket.org/sonarsource/sonarcloud-scan/src/master/
1.1.0: Pulling from sonarsource/sonarcloud-scan
27833a3ba0a5: Pulling fs layer
16d944e3d00d: Pulling fs layer
6aaf465b8930: Pulling fs layer
0684138f4cb6: Pulling fs layer
...
646f14b7521f: Pull complete
94dd58113625: Pull complete
41b91f2908b5: Pull complete
Try to use the Docker cache (Docker is used internally by the sonarcloud-scan pipe).
After enabling the cache, the step runs about 1 minute faster:
- step: &sca
    image: atlassian/default-image:2 # quickest image
    name: SonarCube SCA
    caches:
      - docker
    script:
      - pipe: sonarsource/sonarcloud-scan:1.1.0
      - pipe: sonarsource/sonarcloud-quality-gate:0.1.3
    services:
      - docker
More info: https://gist.github.com/GetoXs/e2b323b048aad88c12a10aceba3cc6cd
Related
Our project uses a multi-stage CI setup where the first stage checks for modifications to files like package-lock.json and Gemfile.lock, compiles all of these dependencies, and then pushes the result to the GitLab container registry.
Using --cache-from in the Docker build, based on the current mainline branch, this is quite fast, and Docker's layering mechanism helps avoid repeating steps.
Subsequent stages and jobs then use the Docker image pushed in the first stage as their image:.
Abbreviated configuration for readability:
stages:
  - create_builder_image
  - test

Create Builder Image:
  stage: create_builder_image
  script:
    - export DOCKER_BRANCH_TAG=$CI_COMMIT_REF_SLUG
    # do stuff to build the image, using cache to speed it up
    - docker push $GITLAB_IMAGE/builder:$DOCKER_BRANCH_TAG

Run Tests:
  image: $GITLAB_IMAGE/builder:$CI_COMMIT_REF_SLUG
  stage: test
  script:
    # do stuff in the context of the image built in the first stage
Unfortunately, when working on longer-running feature branches, we now have a situation where the image in the second step sometimes appears to be outdated: the latest version is not pulled from the registry before the job starts, which makes subsequent jobs complain about missing dependencies.
Is there anything I can do to force it to always pull the latest image for each job?
As already written in the comments, I would not use $CI_COMMIT_REF_SLUG for tagging, simply because it is not guaranteed that all pipelines will run in the same order, and that alone can create issues, including the one you are currently experiencing.
I recommend using $CI_COMMIT_SHA instead, as it is bound to the pipeline. I would also rely on previous builds for caching; I will briefly outline my approach here.
stages:
  - create_builder_image
  - test
  - deploy

Create Builder Image:
  stage: create_builder_image
  script:
    - (docker pull $GITLAB_IMAGE/builder:$CI_COMMIT_REF_SLUG && export DOCKER_CACHE_TAG=$CI_COMMIT_REF_SLUG) || (docker pull $GITLAB_IMAGE/builder:latest && export DOCKER_CACHE_TAG=latest) || true
    - docker build --cache-from $GITLAB_IMAGE/builder:$DOCKER_CACHE_TAG ...
    # do stuff to build the image, using cache to speed it up
    - docker push $GITLAB_IMAGE/builder:$CI_COMMIT_SHA

Run Tests:
  image: $GITLAB_IMAGE/builder:$CI_COMMIT_SHA
  stage: test
  script:
    # do stuff in the context of the image built in the first stage

Push image: # push the image for the current branch ref, since we now know it is a working image and it can then be used for caching by others
  image: docker:20
  stage: deploy
  variables:
    GIT_STRATEGY: none
  script:
    - docker pull $GITLAB_IMAGE/builder:$CI_COMMIT_SHA
    - docker tag $GITLAB_IMAGE/builder:$CI_COMMIT_SHA $GITLAB_IMAGE/builder:$CI_COMMIT_REF_SLUG
    - docker push $GITLAB_IMAGE/builder:$CI_COMMIT_REF_SLUG
I know this might add additional build steps, but this way you can ensure that you always have the image that belongs to the pipeline. You can still use Docker's caching and layering, and as a bonus, the branch image will not be pushed if the tests fail.
Furthermore, you can also add a step before building the builder image in which you figure out whether you need a new image at all, as sketched below.
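A rough sketch of such a gate, keyed on a checksum of the files that feed the builder image (the file list and tag scheme here are illustrative assumptions, not from the original setup):

Create Builder Image:
  stage: create_builder_image
  script:
    # Derive a tag from the builder's inputs (adjust the file list to your project).
    - export BUILDER_TAG=$(cat Dockerfile package-lock.json Gemfile.lock | sha256sum | cut -c1-12)
    # If an image for exactly these inputs already exists in the registry, reuse it and stop here.
    - docker pull $GITLAB_IMAGE/builder:$BUILDER_TAG && exit 0 || true
    - docker build -t $GITLAB_IMAGE/builder:$BUILDER_TAG .
    - docker push $GITLAB_IMAGE/builder:$BUILDER_TAG

Whether this pays off depends on how often those input files actually change; downstream jobs would also have to derive the same tag to reference the image.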
What I need is a way to build a Dockerfile within the repository as an image and use this as the image for the next step(s).
I've tried the Bitbucket Pipelines configuration below, but in the "Build" step it doesn't seem to have the image (which was built in the previous step) in its cache.
pipelines:
  branches:
    main:
      - step:
          name: Docker Image(s)
          script:
            - docker build -t foo/bar .docker/composer
          services:
            - docker
          caches:
            - docker
      - step:
          name: Build
          image: foo/bar
          script:
            - echo "Hello, World"
            - composer --version
          services:
            - docker
          caches:
            - docker
I've tried the answer to the StackOverflow question below, but the context of that question is pushing the image in the following step; it's not about using the built image as the image for the step itself.
Bitbucket pipeline use locally built image from previous step
There are a few conceptual mistakes in your current pipeline. Let me first run through those before giving you some possible solutions.
Clarifications
Caching
Bitbucket Pipelines uses the cache keyword to persist data across multiple pipelines. While it also persists across steps, the primary use case is for the data to be reused in separate builds. The cache takes 7 days to expire and thus will not be updated with new data during those 7 days. You can manually delete the cache on the main Pipelines page. If you want to carry data across steps in the same pipeline, you should use the artifacts keyword.
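For example, a minimal sketch of carrying build output between steps with artifacts (the step names, build command, and dist/ path are illustrative):

- step:
    name: Build
    script:
      - ./build.sh # hypothetical script that produces dist/
    artifacts:
      - dist/**
- step:
    name: Test
    script:
      - ls dist/ # dist/ is restored here from the previous step's artifact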
Docker service
You only need the docker service when you want a Docker daemon available to your build, most commonly when you need to run a docker command in your script. Your second step does not run any docker commands, so it doesn't need the docker service.
Solution 1 - Combine the steps
Combine the steps, and run composer within the created image by using the docker run command.
pipelines:
  branches:
    main:
      - step:
          name: Docker image and build
          script:
            - docker build -t foo/bar .docker/composer
            # Replace <destination> with the working directory of the foo/bar image.
            - docker run -v $BITBUCKET_CLONE_DIR:<destination> foo/bar composer --version
          services:
            - docker
Solution 2 - Using two steps with DockerHub
This example keeps the two-step approach. In this scenario, you push your foo/bar image to a public repository on Docker Hub. Pipelines then pulls it for use in the subsequent step.
pipelines:
  branches:
    main:
      - step:
          name: Docker Image(s)
          script:
            - docker build -t foo/bar .docker/composer
            - docker login -u $DOCKERHUB_USER -p $DOCKERHUB_PASSWORD
            - docker push foo/bar
          services:
            - docker
      - step:
          name: Build
          image: foo/bar
          script:
            - echo "Hello, World. I'm running inside of the previously pushed foo/bar container"
            - composer --version
If you'd like to use a private repository instead, you can replace the second step with:
...
      - step:
          name: Build
          image:
            name: foo/bar
            username: $DOCKERHUB_USERNAME
            password: $DOCKERHUB_PASSWORD
            email: $DOCKERHUB_EMAIL
          script:
            - echo "Hello, World. I'm running inside of the previously pushed foo/bar container"
            - composer --version
To expand on phod's answer: if you really want two steps, you can transfer the image from one step to the other as an artifact.
pipelines:
  branches:
    main:
      - step:
          name: Docker Image(s)
          script:
            - docker build -t foo/bar .docker/composer
            - docker image save foo/bar -o foobar.tar.gz
          services:
            - docker
          caches:
            - docker
          artifacts:
            - foobar.tar.gz
      - step:
          name: Build
          script:
            - docker image load -i foobar.tar.gz
            - docker run -v $BITBUCKET_CLONE_DIR:<destination> foo/bar composer --version
          services:
            - docker
Note that this will upload all the layers and dependencies of the image. It can take quite a while to execute and may therefore not be the best solution.
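If the artifact size is the main pain point, one small variation (not from the original answer) is to compress the tarball while saving it; docker save writes to stdout when no -o flag is given, and docker image load transparently handles gzip-compressed archives:

            - docker image save foo/bar | gzip > foobar.tar.gz

The later docker image load -i foobar.tar.gz then works unchanged.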
I want to build a singularity image in GitLab CI. Unfortunately, the official containers fail with:
Running with gitlab-runner 13.5.0 (ece86343) on gitlab-ci d6913e69
Preparing the "docker" executor
Using Docker executor with image quay.io/singularity/singularity:v3.7.0 ...
Pulling docker image quay.io/singularity/singularity:v3.7.0 ...
Using docker image sha256:46d3827bfb2f5088e2960dd7103986adf90f2e5b4cbea9eeb0b0eacfe10e3420 for quay.io/singularity/singularity:v3.7.0 with digest quay.io/singularity/singularity@sha256:def886335e36f47854c121be0ce0c70b2ff06d9381fe8b3d1894fee689615624 ...
Preparing environment
Running on runner-d6913e69-project-2906-concurrent-0 via <gitlab.url>...
Getting source from Git repository
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in <repo-path>
Checking out 708cc829 as master...
Skipping Git submodules setup
Executing "step_script" stage of the job script
Error: unknown command "sh" for "singularity"
immediately at the beginning, when using a job like this:
build-singularity:
  image: quay.io/singularity/singularity:v3.7.0
  stage: singularity
  script:
    - build reproduction/pipeline/semrepro-singularity/semrepro-singularity.sif reproduction/pipeline/semrepro-singularity/semrepro-singularity.def
  only:
    changes:
      - reproduction/pipeline/semrepro-singularity/semrepro-singularity.def
      - reproduction/pipeline/semrepro-singularity/assets/mirrorlist
      - .gitlab/ci/build-semrepo-singularity.yml
  artifacts:
    paths:
      - reproduction/pipeline/semrepro-singularity/semrepro-singularity.sif
    expire_in: 1 hour
  interruptible: true
For me, it seems like GitLab is trying to use a shell that doesn't exist? How is this supposed to work? In the official example they use a special version of the Docker image called -gitlab, but that unfortunately isn't available anymore. Any ideas? I can't imagine that it isn't possible to build Singularity containers within CI. Thanks a lot in advance!
EDIT: According to @tsnowlan's answer, overriding the entrypoint fixes the above issue. However, the build now fails with:
singularity build semrepro-singularity.sif semrepro-singularity.def
INFO: Starting build...
INFO: Downloading library image
84.1MiB / 84.1MiB [========================================] 100 % 28.7 MiB/s 0s
ERROR: unpackSIF failed: root filesystem extraction failed: extract command failed: ERROR : Failed to create user namespace: not allowed to create user namespace: exit status 1
FATAL: While performing build: packer failed to pack: root filesystem extraction failed: extract command failed: ERROR : Failed to create user namespace: not allowed to create user namespace: exit status 1
Cleaning up file based variables
ERROR: Job failed: exit code 1
Any ideas?
You need to finagle it a bit to make it play nicely with GitLab CI. The easiest way I found was to clobber the Docker entrypoint and make the script step the full singularity build command. We're using this to build our Singularity images with v3.6.4, but it should work with v3.7.0 as well.
e.g.,
build-singularity:
  image:
    name: quay.io/singularity/singularity:v3.7.0
    entrypoint: [""]
  stage: singularity
  script:
    - singularity build reproduction/pipeline/semrepro-singularity/semrepro-singularity.sif reproduction/pipeline/semrepro-singularity/semrepro-singularity.def
...
edit: the gitlab-runner used must also have privileged mode enabled. This is the default on the gitlab.com shared runners, but if you use your own runners you'll need to make sure it is set in their config.
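For a self-managed runner with the Docker executor, that setting lives in the runner's config.toml; a minimal excerpt (other keys omitted, values illustrative):

[[runners]]
  executor = "docker"
  [runners.docker]
    # privileged mode is what permits the user-namespace creation that the build needs
    privileged = true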
I want to build a Windows Docker image on Azure Pipelines. Pulling the base image takes up to 20 minutes. How can I speed up the docker pull?
I want to use the Azure-hosted pipeline.
I cannot use the cached images on the agent.
Example script based on pipeline resources:
trigger:
- '*'

resources:
  containers:
  - container: sdk
    image: mcr.microsoft.com/dotnet/framework/sdk:4.8-20190611-windowsservercore-ltsc2019
  - container: runtime
    image: mcr.microsoft.com/dotnet/framework/runtime:4.8-20190611-windowsservercore-ltsc2019

jobs:
- job: pullSdk
  pool:
    vmImage: 'windows-2019'
  container: sdk
- job: pullRuntime
  pool:
    vmImage: 'windows-2019'
  container: runtime
If you are using a hosted agent, the only speedup method I can think of is to use a Docker image that is already cached on the hosted agent. This can save a lot of time.
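To see which images a particular hosted agent already has cached, a throwaway job like the sketch below (job name illustrative) can simply list them, and the base image can then be pinned to one of those tags:

jobs:
- job: listCachedImages
  pool:
    vmImage: 'windows-2019'
  steps:
  - script: docker images
    displayName: List images pre-cached on the hosted agent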
For a faster build, the best practice is to set up a self-hosted agent.
In addition, you could add your feature request on our UserVoice site, which is our main forum for product suggestions. After the suggestion is raised, you can vote and add your comments to this feedback. The product team will provide updates if they review it.
I'm building my pipeline to create a Docker image and then push it to AWS. I have it broken into steps, and in Bitbucket you have to tell it which artifacts to share between them. I have a feeling this is a simple bug, but I just cannot figure it out.
It's failing at 'docker tag' in step 4 with:
docker tag $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
Error response from daemon: No such image: projectname:v.11
Basically it cannot find the docker image created...
Here's my pipeline script (some of it simplified)
image: atlassian/default-image:latest

options:
  docker: true

pipelines:
  branches:
    dev:
      - step:
          name: 1. Install dotnet
          script:
            # Do things
      - step:
          name: 2. Install AWS CLI
          script:
            # Do some more things
      - step:
          name: 3. Build Docker Image
          script:
            - export DOCKER_PROJECT_NAME=projectname
            - docker build -t $DOCKER_PROJECT_NAME:latest -t $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER .
          artifacts:
            - ./**
      - step:
          name: 4. Push Docker Image to AWS
          script:
            # Tag and push my docker image to ECR
            - export DOCKER_PROJECT_NAME=projectname
            - docker tag $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
            - docker push $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
Now, I know this script works, but only if I remove all the steps. For whatever reason, step 4 doesn't have access to the docker image created in step 3. Any help is appreciated!
Your Docker images are not stored in the folder where you start the build, so they are not saved as artifacts and are not available in the next step.
Even if they were (you could pack/unpack them via docker save), you would probably run up against the size limits for artifacts, not to mention the time it takes to pack and unpack.
I think you'd be better off creating a Dockerfile in your project yourself and combining steps 1 and 2 there. Your Bitbucket pipeline could then be based on a Docker image that already contains the AWS CLI and uses Docker as a service, and your single step would then consist of building your project's Dockerfile and uploading it to AWS. This also lowers your dependency on Bitbucket Pipelines.
The Docker image is not being passed from step 3 to step 4 because the Docker image is not stored in the build directory.
The simplest solution would be to combine all four of your steps into a single step, as follows:
image: atlassian/default-image:latest

options:
  docker: true

pipelines:
  branches:
    dev:
      - step:
          script:
            # Install dependencies
            - ./install-dot-net
            - ./install-aws-cli
            # Build the Docker image
            - export DOCKER_PROJECT_NAME=projectname
            - docker build -t $DOCKER_PROJECT_NAME:latest -t $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER .
            # Tag and push the Docker image to ECR
            - export DOCKER_PROJECT_NAME=projectname
            - docker tag $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
            - docker push $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER