I want to build a Singularity image in GitLab CI. Unfortunately, the official containers fail with:
Running with gitlab-runner 13.5.0 (ece86343) on gitlab-ci d6913e69
Preparing the "docker" executor
Using Docker executor with image quay.io/singularity/singularity:v3.7.0 ...
Pulling docker image quay.io/singularity/singularity:v3.7.0 ...
Using docker image sha256:46d3827bfb2f5088e2960dd7103986adf90f2e5b4cbea9eeb0b0eacfe10e3420 for quay.io/singularity/singularity:v3.7.0 with digest quay.io/singularity/singularity@sha256:def886335e36f47854c121be0ce0c70b2ff06d9381fe8b3d1894fee689615624 ...
Preparing environment
Running on runner-d6913e69-project-2906-concurrent-0 via <gitlab.url>...
Getting source from Git repository
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in <repo-path>
Checking out 708cc829 as master...
Skipping Git submodules setup
Executing "step_script" stage of the job script
Error: unknown command "sh" for "singularity"
immediately at the beginning, when using a job like this:
build-singularity:
  image: quay.io/singularity/singularity:v3.7.0
  stage: singularity
  script:
    - build reproduction/pipeline/semrepro-singularity/semrepro-singularity.sif reproduction/pipeline/semrepro-singularity/semrepro-singularity.def
  only:
    changes:
      - reproduction/pipeline/semrepro-singularity/semrepro-singularity.def
      - reproduction/pipeline/semrepro-singularity/assets/mirrorlist
      - .gitlab/ci/build-semrepo-singularity.yml
  artifacts:
    paths:
      - reproduction/pipeline/semrepro-singularity/semrepro-singularity.sif
    expire_in: 1 hour
  interruptible: true
To me, it seems like GitLab is trying to use a shell that doesn't exist? How are these images supposed to work? In the official example they use a special variant of the Docker image with a -gitlab suffix, but that unfortunately isn't available anymore. Any ideas? I can't imagine it isn't possible to build Singularity containers within CI. Thanks a lot in advance!
EDIT: According to @tsnowlan's answer, overriding the entrypoint fixes the above issue. However, the build now fails with:
singularity build semrepro-singularity.sif semrepro-singularity.def
INFO: Starting build...
INFO: Downloading library image
84.1MiB / 84.1MiB [========================================] 100 % 28.7 MiB/s 0s
ERROR: unpackSIF failed: root filesystem extraction failed: extract command failed: ERROR : Failed to create user namespace: not allowed to create user namespace: exit status 1
FATAL: While performing build: packer failed to pack: root filesystem extraction failed: extract command failed: ERROR : Failed to create user namespace: not allowed to create user namespace: exit status 1
Cleaning up file based variables
ERROR: Job failed: exit code 1
Any ideas?
You need to finagle it a bit to make it play nice with GitLab CI. The easiest way I found was to clobber the Docker entrypoint and have the script step be the full singularity build command. We're using this to build our Singularity images with v3.6.4, but it should work with v3.7.0 as well.
e.g.,
build-singularity:
  image:
    name: quay.io/singularity/singularity:v3.7.0
    entrypoint: [""]
  stage: singularity
  script:
    - singularity build reproduction/pipeline/semrepro-singularity/semrepro-singularity.sif reproduction/pipeline/semrepro-singularity/semrepro-singularity.def
  ...
edit: the gitlab-runner used must also have privileged mode enabled. This is the default on the gitlab.com shared runners, but if you are using your own runners you'll need to make sure it is set in their config.
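For reference, a minimal sketch of the relevant part of a self-hosted runner's config.toml (usually /etc/gitlab-runner/config.toml; the name, url and token values here are placeholders):

[[runners]]
  name = "my-docker-runner"
  url = "https://gitlab.example.com/"
  token = "RUNNER_TOKEN"
  executor = "docker"
  [runners.docker]
    # run job containers privileged, which the singularity build needs
    privileged = true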
Related
We want to use Paketo.io / Cloud Native Buildpacks (CNB) with GitLab CI in the simplest way possible. Our GitLab setup uses an AWS EKS cluster with unprivileged GitLab CI runners leveraging the Kubernetes executor. We also don't want to introduce security risks by using Docker in our builds, so we neither have our host's /var/run/docker.sock exposed nor want to use docker:dind.
We found some guides on how to use Paketo with GitLab CI, like this one: https://tanzu.vmware.com/developer/guides/gitlab-ci-cd-cnb/. But as described beneath the headline "Use Cloud Native Buildpacks with GitLab in GitLab Build Job WITHOUT Using the GitLab Build Template", the approach relies on Docker and the pack CLI. We tried to replicate this in our .gitlab-ci.yml, which looks like this:
image: docker:20.10.9

stages:
  - build

before_script:
  - |
    echo "install pack CLI (see https://buildpacks.io/docs/tools/pack/)"
    apk add --no-cache curl
    (curl -sSL "https://github.com/buildpacks/pack/releases/download/v0.21.1/pack-v0.21.1-linux.tgz" | tar -C /usr/local/bin/ --no-same-owner -xzv pack)

build-image:
  stage: build
  script:
    - pack --version
    - >
      pack build $REGISTRY_GROUP_PROJECT/$CI_PROJECT_NAME:latest
      --builder paketobuildpacks/builder:base
      --path .
But as outlined, our setup does not support Docker, so we end up with the following error in our logs:
...
$ echo "install pack CLI (see https://buildpacks.io/docs/tools/pack/)" # collapsed multi-line command
install pack CLI (see https://buildpacks.io/docs/tools/pack/)
fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/community/x86_64/APKINDEX.tar.gz
(1/4) Installing brotli-libs (1.0.9-r5)
(2/4) Installing nghttp2-libs (1.43.0-r0)
(3/4) Installing libcurl (7.79.1-r0)
(4/4) Installing curl (7.79.1-r0)
Executing busybox-1.33.1-r3.trigger
OK: 12 MiB in 26 packages
pack
$ pack --version
0.21.1+git-e09e397.build-2823
$ pack build $REGISTRY_GROUP_PROJECT/$CI_PROJECT_NAME:latest --builder paketobuildpacks/builder:base --path .
ERROR: failed to build: failed to fetch builder image 'index.docker.io/paketobuildpacks/builder:base': Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Cleaning up project directory and file based variables 00:01
ERROR: Job failed: command terminated with exit code 1
Any idea how to use Paketo Buildpacks with GitLab CI without having Docker present inside our GitLab Kubernetes runners (which seems to be kind of a best practice)? We also don't want our setup to become too complex - e.g. by adding kpack.
TL;DR:
Use the Buildpacks lifecycle directly inside your .gitlab-ci.yml (here's a fully working example):
image: paketobuildpacks/builder

stages:
  - build

# We somehow need to access GitLab Container Registry with the Paketo lifecycle
# So we simply create ~/.docker/config.json as stated in https://stackoverflow.com/a/41710291/4964553
before_script:
  - mkdir ~/.docker
  - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_JOB_TOKEN\"}}}" >> ~/.docker/config.json

build-image:
  stage: build
  script:
    - /cnb/lifecycle/creator -app=. $CI_REGISTRY_IMAGE:latest
The details: "using the lifecycle directly"
There are ongoing discussions about this topic. Especially have a look into https://github.com/buildpacks/pack/issues/564 and https://github.com/buildpacks/pack/issues/413#issuecomment-565165832. As stated there:
If you're looking to build images in CI (not locally), I'd encourage
you to use the lifecycle directly for that, so that you don't need
Docker. Here's an example:
The link to the example is broken, but it refers to the Tekton implementation of how to use buildpacks in a Kubernetes environment. Here we get a first clue about what Stephen Levine referred to as "using the lifecycle directly". The crucial point in it is the usage of command: ["/cnb/lifecycle/creator"]. So this is the lifecycle everyone is talking about! And there's good documentation about this command, to be found in this CNB RFC.
Choosing a good image: paketobuildpacks/builder:base
So how do we develop a working .gitlab-ci.yml? Let's start simple. Digging into the Tekton implementation you'll see that the lifecycle command is executed inside an environment defined by BUILDER_IMAGE, which itself is documented as "The image on which builds will run (must include lifecycle and compatible buildpacks)". That sounds familiar! Can't we simply pick the builder image paketobuildpacks/builder:base from our pack CLI command? Let's try this locally on our workstation before committing too much noise into our GitLab. Choose a project you want to build (if you like, clone the example Spring Boot app I created at gitlab.com/jonashackt/microservice-api-spring-boot) and run:
docker run --rm -it -v "$PWD":/usr/src/app -w /usr/src/app paketobuildpacks/builder bash
Now, inside the container powered by the paketobuildpacks/builder image, try to run the Paketo lifecycle directly with:
/cnb/lifecycle/creator -app=. microservice-api-spring-boot:latest
I only used the -app parameter out of the many possible parameters for the creator command, since most of them have quite good defaults. But as our app directory is the current directory rather than the default /workspace, I configured it explicitly. We also need to define an <image-name> at the end, which will simply be used as the resulting container image name.
The first .gitlab-ci.yml
Both commands worked on my local workstation, so let's finally create a .gitlab-ci.yml using this approach (here's a fully working example .gitlab-ci.yml):
image: paketobuildpacks/builder

stages:
  - build

build-image:
  stage: build
  script:
    - /cnb/lifecycle/creator -app=. $CI_REGISTRY_IMAGE:latest
docker login without docker
As we don't have Docker available inside our Kubernetes runners, we can't log in to the GitLab Container Registry as described in the docs. So the following error occurred with this first approach:
===> ANALYZING
ERROR: failed to get previous image: connect to repo store "gitlab.yourcompanyhere.cloud:4567/yourgroup/microservice-api-spring-boot:latest": GET https://gitlab.yourcompanyhere.cloud/jwt/auth?scope=repository%3Ayourgroup%2Fmicroservice-api-spring-boot%3Apull&service=container_registry: DENIED: access forbidden
Cleaning up project directory and file based variables 00:01
ERROR: Job failed: command terminated with exit code 1
Using the approach described in this SO answer fixed the problem. We need to create a ~/.docker/config.json containing the GitLab Container Registry login information - the Paketo build will then pick it up, as stated in the docs:
If CNB_REGISTRY_AUTH is unset and a docker config.json file is
present, the lifecycle SHOULD use the contents of this file to
authenticate with any matching registry.
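For illustration, the config.json created in the snippet below would end up looking roughly like this (the registry host and token value are placeholders; on gitlab.com the CI registry user is gitlab-ci-token):

{
  "auths": {
    "registry.gitlab.com": {
      "username": "gitlab-ci-token",
      "password": "<job token>"
    }
  }
}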
Inside our .gitlab-ci.yml this could look like:
# We somehow need to access GitLab Container Registry with the Paketo lifecycle
# So we simply create ~/.docker/config.json as stated in https://stackoverflow.com/a/41710291/4964553
before_script:
  - mkdir ~/.docker
  - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_JOB_TOKEN\"}}}" >> ~/.docker/config.json
Our final .gitlab-ci.yml
As we're using the image paketobuildpacks/builder at the top of our .gitlab-ci.yml, we can now leverage the lifecycle directly - which is what we wanted to do in the first place. Just remember to use the correct GitLab CI variables for your <image-name>, like this:
/cnb/lifecycle/creator -app=. $CI_REGISTRY_IMAGE:latest
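For illustration, $CI_REGISTRY_IMAGE expands to your project's registry path (yourgroup/yourproject here are placeholders), so on gitlab.com the command effectively becomes:

/cnb/lifecycle/creator -app=. registry.gitlab.com/yourgroup/yourproject:latest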
Otherwise the Buildpack's analyze step will break, and the image won't get pushed to the GitLab Container Registry in the end. So finally our .gitlab-ci.yml looks like this (here's the fully working example):
image: paketobuildpacks/builder

stages:
  - build

# We somehow need to access GitLab Container Registry with the Paketo lifecycle
# So we simply create ~/.docker/config.json as stated in https://stackoverflow.com/a/41710291/4964553
before_script:
  - mkdir ~/.docker
  - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_JOB_TOKEN\"}}}" >> ~/.docker/config.json

build-image:
  stage: build
  script:
    - /cnb/lifecycle/creator -app=. $CI_REGISTRY_IMAGE:latest
Our builds should now run successfully using Paketo Buildpacks, without the pack CLI and without Docker.
See the full log of the example project here.
I have configured my project to run in a pipeline. Here is my .gitlab-ci.yml file content:
image: markhobson/maven-chrome:latest

stages:
  - flow1

execute job1:
  stage: flow1
  tags:
    - QaFunctional
  script:
    - export
    - set +e -x
    - mvn --update-snapshots --define updateResults=${updateResults} clean test
Error after executing the pipeline:
bash: line 328: mvn: command not found
Running after_script
Uploading artifacts for failed job
ERROR: Job failed: exit status 127
Can anyone help me spot the error, please?
Is it unable to load the Docker image?
When I use a shared runner I am able to execute the same job.
The error you get means there is no Maven installed on the job executor: mvn: command not found.
The image you specified (markhobson/maven-chrome:latest) does have the mvn command:
# docker run markhobson/maven-chrome:latest mvn --version
Apache Maven 3.6.3 (cecedd343002696d0abb50b32b541b8a6ba2883f)
The other thing you specified is tags:
...
tags:
- QaFunctional
...
When both image and tags are specified in your YAML, the tags determine which runner picks up the job - and a shell runner ignores the image keyword.
It looks like your custom runner tagged QaFunctional is a shell runner without mvn configured.
As a solution, either install Maven on the QaFunctional runner or run the job on a Docker runner (the shared runners should do). To avoid such confusion, don't specify image when you want your job to run on a tagged shell runner.
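For illustration, a minimal sketch of the job without the tags, so that a shared Docker-executor runner picks it up and the image takes effect:

execute job1:
  stage: flow1
  image: markhobson/maven-chrome:latest
  # no tags: a shared Docker runner picks up the job and honours the image
  script:
    - mvn --update-snapshots --define updateResults=${updateResults} clean test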
I'm just getting started with docker and continuous integration with Gitlab. I've added the following gitlab-ci.yml file to the root of my repository:
# Official docker image
image: docker:latest

services:
  - docker:dind

build-dev:
  stage: build
  script:
    - docker build -t obikerui/project -f app/Dockerfile.dev ./app

test:
  stage: test
  script:
    - docker run obikerui/project npm run test -- --coverage
The build-dev stage runs and passes but the test stage fails with the following error message:
$ docker run obikerui/project npm run test -- --coverage
Unable to find image 'obikerui/project:latest' locally
docker: Error response from daemon: pull access denied for obikerui/project, repository does not exist or may require 'docker login'.
See 'docker run --help'.
ERROR: Job failed: exit code 125
Can anyone explain what's going wrong and suggest a fix? The repository is private, so do I need to provide some extra configuration to accommodate this?
Each job runs in a fresh environment. You build and tag your image correctly, but the image only exists inside the environment of that one job.
When the test job starts, it gets a new environment that does not have the image built by the previous job.
You should push your image to a registry (after tagging it accordingly), and the test job should then pull the image from that registry.
You can use a public registry like Docker Hub, or you can run a local registry container based on the registry:2 image provided by Docker. In the latter case you have to make sure the domain name pointing to the registry is reachable on your network (it can be an nginx reverse proxy).
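A minimal sketch of that approach using the project's own GitLab Container Registry and GitLab's predefined CI variables (assuming the obikerui/project name from the question is replaced by the registry path):

image: docker:latest

services:
  - docker:dind

before_script:
  - docker login -u "$CI_REGISTRY_USER" -p "$CI_JOB_TOKEN" "$CI_REGISTRY"

build-dev:
  stage: build
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" -f app/Dockerfile.dev ./app
    # push so that later jobs, which start with a fresh daemon, can pull it
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"

test:
  stage: test
  script:
    - docker pull "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
    - docker run "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" npm run test -- --coverage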
I'm building my pipeline to create a Docker image and then push it to AWS. I have it broken into steps, and in Bitbucket you have to tell it which artifacts to share between them. I have a feeling this is a simple bug, but I just cannot figure it out.
It's failing at 'docker tag' in step 4 with:
docker tag $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
Error response from daemon: No such image: projectname:v.11
Basically it cannot find the docker image created...
Here's my pipeline script (some of it simplified):
image: atlassian/default-image:latest

options:
  docker: true

pipelines:
  branches:
    dev:
      - step:
          name: 1. Install dotnet
          script:
            # Do things
      - step:
          name: 2. Install AWS CLI
          script:
            # Do some more things
      - step:
          name: 3. Build Docker Image
          script:
            - export DOCKER_PROJECT_NAME=projectname
            - docker build -t $DOCKER_PROJECT_NAME:latest -t $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER .
          artifacts:
            - ./**
      - step:
          name: 4. Push Docker Image to AWS
          script:
            # Tag and push my docker image to ECR
            - export DOCKER_PROJECT_NAME=projectname
            - docker tag $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
            - docker push $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
Now, I know this script works, but only if I collapse everything into a single step. For whatever reason, step 4 doesn't have access to the Docker image created in step 3. Any help is appreciated!
Your Docker images are not stored in the folder where you start the build, so they are not saved as artifacts and are not available in the next step.
Even if they were (you could pack/unpack the image with docker save and docker load), you would probably run up against the size limits for artifacts, not to mention the time it takes to pack and unpack.
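For completeness, a rough sketch of that save/load variant (not recommended, for the size and time reasons above):

- step:
    name: 3. Build Docker Image
    script:
      - docker build -t projectname:v.$BITBUCKET_BUILD_NUMBER .
      # serialise the image into the clone directory so it survives as an artifact
      - docker save projectname:v.$BITBUCKET_BUILD_NUMBER -o image.tar
    artifacts:
      - image.tar
- step:
    name: 4. Push Docker Image to AWS
    script:
      # restore the image from the artifact before tagging and pushing
      - docker load -i image.tar
      - docker tag projectname:v.$BITBUCKET_BUILD_NUMBER $AWS_REGISTRY_URL/projectname:v.$BITBUCKET_BUILD_NUMBER
      - docker push $AWS_REGISTRY_URL/projectname:v.$BITBUCKET_BUILD_NUMBER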
I guess you'd be better off creating a Dockerfile in your project yourself and combining steps 1 & 2 there. Your Bitbucket pipeline could then be based on a Docker image that already contains the AWS CLI and uses Docker as a service, and your single step would consist of building your project's Dockerfile and uploading to AWS. This also lowers your dependency on Bitbucket Pipelines, as the build tooling then lives in your own image rather than in the pipeline definition.
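To sketch that idea (your-registry/ci-dotnet-aws is a hypothetical base image you'd build once, containing the .NET SDK and the AWS CLI):

image: your-registry/ci-dotnet-aws:latest

options:
  docker: true

pipelines:
  branches:
    dev:
      - step:
          script:
            # build, tag and push in one step, so the image never leaves the daemon
            - export DOCKER_PROJECT_NAME=projectname
            - docker build -t $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER .
            - docker tag $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
            - docker push $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER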
The Docker image is not being passed from step 3 to step 4 as the Docker image is not stored in the build directory.
The simplest solution would be to combine all four of your steps into a single step as follows:
image: atlassian/default-image:latest

options:
  docker: true

pipelines:
  branches:
    dev:
      - step:
          script:
            # Install dependencies
            - ./install-dot-net
            - ./install-aws-cli
            # Build the Docker image
            - export DOCKER_PROJECT_NAME=projectname
            - docker build -t $DOCKER_PROJECT_NAME:latest -t $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER .
            # Tag and push the Docker image to ECR
            - docker tag $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
            - docker push $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
Environment:
GitLab Community Edition 9.5.2
Description:
I use node:8.4.0 as my main image. The other jobs run some Node.js programs, which I will omit below.
Here is my .gitlab-ci.yml:
image: node:8.4.0

services:
  - docker:latest

stages:
  - docker_build

docker_build_job:
  stage: docker_build
  script:
    - sudo docker build -t my_name/repo_name .
    - sudo docker images
Problem:
I cannot use the docker command in the GitLab runner; I get the message below:
Running with gitlab-ci-multi-runner 9.5.0 (413da38)
on ci server running on a VM of PEM5208 (5a0ceca0)
Using Docker executor with image node:8.4.0 ...
Starting service docker:latest ...
Pulling docker image docker:latest ...
Using docker image docker:latest ID=sha256:be47faef67c2e5950a540799e72189867b517010ad8ef98aa0181878d81b0064 for docker service...
Waiting for services to be up and running...
*** WARNING: Service runner-5a0ceca0-project-129-concurrent-0-docker-0 probably didn't start properly.
exit code 1
*********
Using docker image sha256:3f7a536cd71bb3049cc0aa12fb3e131a03a33efe2175ffbb95216d264500d1a1 for predefined container...
Pulling docker image node:8.4.0 ...
Using docker image node:8.4.0 ID=sha256:60bea5b8607945a43b53f5022088a73f2817174e11a3b20f78ea78a45f545d34 for build container...
Running on runner-5a0ceca0-project-129-concurrent-0 via ci...
Fetching changes...
Removing node_modules/
HEAD is now at 472e1e4 Change the version of docker image.
From https://here-is-my-domain/my_name/repo_name
472e1e4..df29530 master -> origin/master
Checking out 472e1e45 as master...
Skipping Git submodules setup
Downloading artifacts for build_installation_job (914)...
Downloading artifacts from coordinator... ok id=914 responseStatus=200 OK token=fMsaFRzG
$ docker build -t my_name/repo_name .
/bin/bash: line 48: docker: command not found
ERROR: Job failed: exit code 1
How should I modify my .gitlab-ci.yml file to make this work?