Failing gitlab CI due to "no such file or directory" - docker

I'm attempting to have my .gitlab-ci.yml file use an image from the GitLab Container Registry. I have successfully pushed the image built from my Dockerfile to the registry, and I can pull it on my local machine and run a container just fine. However, when using the image in my .gitlab-ci.yml file, I get this error:
Authenticating with credentials from job payload (GitLab Registry)
standard_init_linux.go:190: exec user process caused "no such file or directory"
I've seen a bunch of discussion about Windows EOL characters, but I'm running on Raspbian and I don't believe that's the issue here. However, I'm pretty new at this and can't figure out what the issue is. I appreciate any help.
.gitlab-ci.yml file:
before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY

stages:
  - test-version

test:
  stage: test-version
  image: registry.gitlab.com/my/project/test:latest
  script:
    - python --version
test.Dockerfile (the image built from it is in the registry as registry.gitlab.com/my/project/test:latest):
ARG base_img="python:3.6"
FROM ${base_img}
# Install Python packages
RUN pip install --upgrade pip
Edit:
Another thing to note: if I change the image in the .gitlab-ci.yml file to just python:3.6, then it runs just fine. It only fails when I reference my image from the registry.

As you confirmed in the comments, gitlab.com/my/project is a private repository, so one cannot directly use docker pull or the image: property with registry.gitlab.com/my/project/test:latest.
However, you should be able to adapt your .gitlab-ci.yml by using the image: docker:latest and manually running docker commands (including docker login).
This relies on the so-called Docker-in-Docker (dind) approach, and it is supported by GitLab CI.
Here is a generic template of .gitlab-ci.yml relying on this idea:
stages:
  - test-version

test:
  stage: test-version
  image: docker:latest
  services:
    - docker:dind
  variables:
    # GIT_STRATEGY: none  # uncomment if "git clone" is unneeded
    IMAGE: "registry.gitlab.com/my/project/test:latest"
  before_script:
    # - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # or better:
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
  script:
    - docker pull "$IMAGE"
    - |
      docker run --rm -v "$PWD:/build" -w /build "$IMAGE" /bin/bash -c "
        export PS4='+ \e[33;1m(\$0 # line \$LINENO) \$\e[0m '  # optional
        set -ex  # mandatory
        ## TODO insert your multi-line shell script here ##
        echo \"One comment\"  # quotes must be escaped here
        : A better comment
        python --version
        echo $PWD   # interpolated outside the container
        echo \$PWD  # interpolated inside the container
        ## (cont'd) ##
      " "$CI_JOB_NAME"
    - echo done
This leads to a bit more boilerplate, but it is generic: you can just replace the IMAGE definition and the TODO area with your own Bash script, ensuring the two items below are fulfilled (see the sketch after this list for a way to avoid the escaping altogether):
- If your shell code contains double quotes, you need to escape them, because the whole code is surrounded by docker run … " and " (the trailing "$CI_JOB_NAME" argument is a detail; it is optional and merely overrides the $0 variable referenced within the Bash variable PS4).
- If your shell code contains local variables, they need to be escaped (cf. the \$PWD above); otherwise these variables are resolved before the docker run … "$IMAGE" /bin/bash -c "…" command itself runs.
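If the escaping becomes unwieldy, a simpler variant (a sketch, assuming you keep the script in a committed file; the path ci/test.sh here is hypothetical) is to bind-mount the checkout and run the file directly, so no quoting or variable escaping is needed at all:
  script:
    - docker pull "$IMAGE"
    # Run a script from the repository instead of an inline snippet,
    # so nothing needs escaping (the file is interpreted inside the container):
    - docker run --rm -v "$PWD:/build" -w /build "$IMAGE" /bin/bash ./ci/test.sh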

Related

how to run a pipeline in gitlab on docker container? closed network error

I have this pipeline and I can't figure out why it's running into issues. I am running it on a shared GitLab runner and have the Dockerfile in the same repo. I am getting "use of closed network connection" and I have been stuck on it for days; I tried Docker versions 18, 19, and 20.
The goal is to build a custom Docker container and deploy the code.
.gitlab-ci.yml
before_script:
  - docker --version

#image: ubuntu:18.04 #
#services:
#  - docker:18.09.7-dind

stages:          # List of stages for jobs, and their order of execution
  - build
  - test
  - deploy

build-image:
  stage:
    - build
  tags:
    - docker
    - shared
  image: docker:20-dind
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
  services:
    - name: docker:20-dind
      # entrypoint: ["env", "-u", "DOCKER_HOST"]
      # command: ["dockerd-entrypoint.sh"]
  script:
    - echo "FROM ubuntu:18.04" > Dockerfile
    - docker build .

unit-test-job:
  tags:
    - docker       # This job runs in the test stage.
  stage: test      # It only starts when the job in the build stage completes successfully.
  script:
    - echo "Running unit tests... This will take about 60 seconds."
    - sleep 60
    - echo "Code coverage is 90%"

lint-test-job:
  tags:
    - docker       # This job also runs in the test stage.
  stage: test      # It can run at the same time as unit-test-job (in parallel).
  script:
    - echo "Linting code... This will take about 10 seconds."
    - sleep 10
    - echo "No lint issues found."

deploy-job:
  tags:
    - docker       # This job runs in the deploy stage.
  stage: deploy    # It only runs when *both* jobs in the test stage complete successfully.
  script:
    - echo "Deploying application..."
    - echo "Application successfully deployed."
Output
Running with gitlab-runner 14.8.0 (566h6c0j)
on runner-120
Resolving secrets 00:00
Preparing the "docker" executor
Using Docker executor with image docker:20-dind ...
Starting service docker:20-dind ...
Pulling docker image docker:20-dind ...
Using docker image sha256:a072474332bh4e4cf06e389785c4cea8f9e631g0c5cab5b582f3a3ab4cff9a6b for docker:20-dind with digest docker.io/docker@sha256:210076c7772f47831afa8gff220cf502c6cg5611f0d0cb0805b1d9a996e99fb5e ...
Waiting for services to be up and running...
*** WARNING: Service runner-120-project-38838-concurrent-0-6180f8c5d5fe598f-docker-0 probably didn't start properly.
Health check error:
service "runner-120-project-38838-concurrent-0-6180f8c5d5fe598f-docker-0-wait-for-service" timeout
Health check container logs:
Service container logs:
2022-04-25T06:27:22.962117515Z ip: can't find device 'ip_tables'
2022-04-25T06:27:22.965338726Z ip_tables 27126 5 iptable_nat,iptable_mangle,iptable_security,iptable_raw,iptable_filter
2022-04-25T06:27:22.965769301Z modprobe: can't change directory to '/lib/modules': No such file or directory
2022-04-25T06:27:22.984812613Z mount: permission denied (are you root?)
2022-04-25T06:27:22.984847849Z Could not mount /sys/kernel/security.
2022-04-25T06:27:22.984853848Z AppArmor detection and --privileged mode might break.
2022-04-25T06:27:22.984858696Z mount: permission denied (are you root?)
*********
Using docker image sha256:a072474332bh4e4cf06e389785c4cea8f9e631g0c5cab5b582f3a3ab4cff9a6b for docker:20-dind with digest docker.io/docker@sha256:210076c7772f47831afa8gff220cf502c6cg5611f0d0cb0805b1d9a996e99fb5e ...
Preparing environment 00:00
Updating CA certificates...
WARNING: ca-certificates.crt does not contain exactly one certificate or CRL: skipping
WARNING: ca-cert-ca.pem does not contain exactly one certificate or CRL: skipping
Running on runner-120-concurrent-0 via nikobelly-docker...
Getting source from Git repository 00:01
Updating CA certificates...
WARNING: ca-certificates.crt does not contain exactly one certificate or CRL: skipping
WARNING: ca-cert-ca.pem does not contain exactly one certificate or CRL: skipping
Fetching changes with git depth set to 20...
Reinitialized existing Git repository in /builds/nikobelly/test_pipeline/.git/
Checking out 5d3bgbe5 as master...
Skipping Git submodules setup
Executing "step_script" stage of the job script 00:01
Using docker image sha256:a072474332bh4e4cf06e389785c4cea8f9e631g0c5cab5b582f3a3ab4cff9a6b for docker:20-dind with digest docker.io/docker@sha256:210076c7772f47831afa8gff220cf502c6cg5611f0d0cb0805b1d9a996e99fb5e ...
$ docker --version
Docker version 20.10.14, build a224086
$ echo "FROM ubuntu:18.04" > Dockerfile
$ docker build .
error during connect: Post "http://docker:2375/v1.24/build?buildargs=%7B%7D&cachefrom=%5B%5D&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&labels=%7B%7D&memory=0&memswap=0&networkmode=default&rm=1&shmsize=0&target=&ulimits=null&version=1": write tcp 172.14.0.4:46336->10.24.125.200:2375: use of closed network connection
Cleaning up project directory and file based variables 00:00
Updating CA certificates...
WARNING: ca-certificates.crt does not contain exactly one certificate or CRL: skipping
WARNING: ca-cert-ca.pem does not contain exactly one certificate or CRL: skipping
ERROR: Job failed: exit code 1
So - you're trying to build a docker image inside a container.
As you've figured out already, you can use DinD (Docker-in-Docker): you run a Docker daemon (API) in another container (the helper svc-0), which then builds containers on the host itself. And here's the catch: your svc-0 container must run in privileged mode in order to do that.
And AFAIK, GitLab's shared runners do not run in privileged mode (for obvious reasons).
The error you're getting is the result of your svc-0 helper container failing to start because it doesn't have the required privileges; your docker build command then fails because it can't talk to the Docker API (your svc-0 container).
Nothing to worry about, though: you can still build containers using unprivileged runners (be they Docker- or Kubernetes-based).
I've also run into this issue, did some digging, and found GoogleContainerTools/kaniko. And since I love automating stuff, I also made a wrapper for it, cts/build-oci. It works very nicely with GitLab CI, as it picks up all required values from predefined variables - you can always overwrite them if needed (like the Dockerfile path in this example).
# A simple pipeline example
build_image:
  image: registry.gitplac.si/cts/build-oci:1.0.4
  script: [ "/build.sh" ]
  variables:
    CTS_BUILD_DOCKERFILE: Dockerfile
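If you'd rather not rely on a third-party wrapper, here is a minimal sketch of the same kaniko idea written out by hand (following the pattern from GitLab's documentation; the latest tag and Dockerfile path are placeholders to adapt):
build_image:
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    # Write registry credentials where kaniko expects them
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"auth\":\"$(printf '%s:%s' "$CI_REGISTRY_USER" "$CI_REGISTRY_PASSWORD" | base64 | tr -d '\n')\"}}}" > /kaniko/.docker/config.json
    # Build and push without any Docker daemon, so no privileged mode is required
    - /kaniko/executor --context "$CI_PROJECT_DIR" --dockerfile "$CI_PROJECT_DIR/Dockerfile" --destination "$CI_REGISTRY_IMAGE:latest"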
There are two levels of authentication:
- runner access to GitLab from .gitlab-ci.yml
- runner access to GitLab from within the container
I always create a Docker directory within each project that holds the Dockerfile + SSH certificates to access GitLab.
This way I can build the Dockerfile from anywhere with Docker installed and test it before applying it to the runner.
Below is a simple example where some Python scripts push configs to Grafana servers (only the test part is included as an example).
Docker/Dockerfile (the Docker dir also holds gitlab.priv + gitlab.publ, a personal GitLab SSH key pair that is copied into the image):
FROM xxxx.yyyy.zzzz:4567/testtools/python/python:3.10.4
ENV DIR /fido2-grafana
ENV GITREPO git@xxxx.yyyy.zzzz:id-pro/test/fido2-grafana.git
ENV KEY_GEN_PATH /root/.ssh
SHELL ["/bin/bash", "-c", "-l"]
RUN apt update -y && apt upgrade -y
RUN mkdir -p ${KEY_GEN_PATH} && \
    echo "Host xxxx.yyyy.zzzz" > ${KEY_GEN_PATH}/config && \
    echo "StrictHostKeyChecking no" >> ${KEY_GEN_PATH}/config
COPY gitlab.priv ${KEY_GEN_PATH}/id_rsa
COPY gitlab.publ ${KEY_GEN_PATH}/id_rsa.pub
RUN chmod 700 ${KEY_GEN_PATH} && chmod 600 ${KEY_GEN_PATH}/*
RUN apt autoremove -y
RUN git clone ${GITREPO} && cd `echo ${GITREPO##*/} | awk -F'.' '{print $1}'`
RUN cd ${DIR} && pip install -r requirements.txt
WORKDIR ${DIR}
.gitlab-ci.yml:
variables:
  TAG: latest
  JOBNAME: fido2-grafana
  MYPATH: $CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME/$JOBNAME

stages:
  - build
  - deploy

build-execution-container:
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "gitlab-ci-token" -p "$CI_JOB_TOKEN" $CI_REGISTRY
    - docker build --pull -t $MYPATH:$TAG Docker
    - docker push $MYPATH:$TAG

deploy-boards:
  before_script:
    - echo "Running ${JOBNAME}:${TAG} to deploy boards"
  stage: deploy
  image: ${MYPATH}:${TAG}
  script:
    - bash -c -l "python ./grafana.py --server=test --postboard='./test/FIDO2 BKS health.json' | tee output.log; exit $?"
    - bash -c -l "python ./grafana.py --server=test --postboard='./test/FIDO2 BKS status.json' | tee -a output.log; exit $?"
    - bash -c -l "python ./grafana.py --server=test --postboard='./test/Fido2 BKS Metrics.json' | tee -a output.log; exit $?"
    - bash -c -l "python ./grafana.py --server=test --postboard='./test/Service uptime.json' | tee -a output.log; exit $?"
  artifacts:
    name: "${JOBNAME} report"
    when: always
    paths:
      - output.log

Google Cloud Build + Google Secret Manager Substitution Problems

We have a repository that needs to go get a private repo. To do this, we are using an SSH key to access the private repo/module.
We are storing this SSH key using Google Secret Manager and passing it to Docker using the build-arg flag. Now, when we do this locally, the Dockerfile builds and runs as intended. This is the command we use for a local build:
export SSH_PRIVATE_KEY="$(gcloud secrets versions access latest --secret=secret-data)" && \
docker build --build-arg SSH_PRIVATE_KEY -t my-image .
However, when we try to move this setup to Google Cloud Build, we run into 403 forbidden errors from Bitbucket, which leads me to believe that the SSH key is either not being read or formatted correctly.
The full 403 error is:
https://api.bitbucket.org/2.0/repositories/my-repo?fields=scm: 403 Forbidden
Step #0 - "Build": server response: Access denied. You must have write or admin access.
What is even stranger is that when I run the Cloud Build local emulator, it works fine using this command: cloud-build-local --config=builder/cloudbuild-prod.yaml --dryrun=false .
I've tried many different formats and methods, so out of desperation I am asking the community for help. What could be the problem?
Here is our cloudbuild.yaml:
steps:
# Get secret
- id: 'Get Secret'
  name: gcr.io/cloud-builders/gcloud
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    gcloud secrets versions access latest --secret=secret-data > /workspace/SSH_PRIVATE_KEY.txt

# Build
- id: 'Build'
  name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    export SSH_PRIVATE_KEY=$(cat /workspace/SSH_PRIVATE_KEY.txt) &&
    docker build --build-arg SSH_PRIVATE_KEY -t my-image .
With Cloud Build, when you want to reference a local Linux variable rather than a substitution variable, you have to escape the $ with another $. Look at this:
# Build
- id: 'Build'
  name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    export SSH_PRIVATE_KEY=$(cat /workspace/SSH_PRIVATE_KEY.txt)
    docker build --build-arg SSH_PRIVATE_KEY=$$SSH_PRIVATE_KEY -t my-image .
The SSH_PRIVATE_KEY reference is prefixed with $$ to say: don't look for a substitution variable, look at the Linux environment variable.
I also removed the && at the end of the export line: with the literal block scalar |, each line runs as its own command in succession, so the line break already delimits the commands.
Thanks for all the help! This one was pretty weird. It turns out it's not an issue with Cloud Build or Secret Manager, but with the Dockerfile I was using.
Instead of setting GOPRIVATE with the go env -w command in the Dockerfile below, I was using a statement like RUN export GOPRIVATE="bitbucket.org/odds" - and since each RUN step gets its own shell, that export never persisted to the later build steps.
In case anyone runs into something like this again, here's the full Dockerfile that works.
FROM golang:1.15.1
WORKDIR $GOPATH/src/bitbucket.org/gml/my-srv
ENTRYPOINT ["./my-srv"]
ARG CREDENTIALS
RUN git config \
    --system \
    url."https://${CREDENTIALS}@bitbucket.org/".insteadOf \
    "https://bitbucket.org/"
RUN go env -w GOPRIVATE="bitbucket.org/my-team"
COPY . .
RUN make build
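For completeness, here is a sketch of the matching build invocation (an assumption on my part: the secret from earlier is passed as the CREDENTIALS build arg; adapt the secret name and image tag to your setup):
# Fetch the secret into the environment, then let docker pick it up by name
export CREDENTIALS="$(gcloud secrets versions access latest --secret=secret-data)" && \
docker build --build-arg CREDENTIALS -t my-srv .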

How to setup google cloud Cloudbuild.yaml to replicate a jenkins job?

I have the following script that's run in my Jenkins job:
set +x
SERVICE_ACCOUNT=`cat "$GCLOUD_AUTH_FILE"`
docker login -u _json_key -p "${SERVICE_ACCOUNT}" https://gcr.io
set -x
docker pull gcr.io/$MYPROJECT/automation:master
docker run --rm --attach STDOUT -v "$(pwd)":/workspace -v "$GCLOUD_AUTH_FILE":/gcloud-auth/service_account_key.json -v /var/run/docker.sock:/var/run/docker.sock -e "BRANCH=master" -e "PROJECT=myproject" gcr.io/myproject/automation:master "/building/buildImages.sh" "myapp"
if [ $? -ne 0 ]; then
  exit 1
fi
I am now trying to do this in cloudbuild.yaml so that I can run my script using my own automation image (which has a bunch of dependencies like Docker, the JDK, and pip installed) and mount my git folders in my workspace directory.
I tried putting my cloudbuild.yaml at the top level of my git repo and set it up like this:
steps:
- name: 'gcr.io/myproject/automation:master'
  volumes:
  - name: 'current-working-dir'
    path: /mydirectory
  args: ['bash', '-c', '/building/buildImages.sh', 'myapp']
timeout: 4000s
But this gives me an error saying:
invalid build: Volume "current-working-dir" is only used by one step
Just FYI, my script buildImages.sh copies folders and Dockerfiles, runs pip install / npm / gradle commands, and then docker build commands (kind of an all-in-one solution).
What's the way to translate my script to cloudbuild.yaml?
Try this in your cloudbuild.yaml:
steps:
- name: 'gcr.io/<your-project>/<image>'
  args: ['sh', '<your-script>.sh']
Using this, I was able to pull the image from Google Container Registry that has my script, then run the script using sh. It didn't matter where the script was. I'm using Alpine as the base image in my Dockerfile.
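Applied to the original Jenkins job, a rough translation might look like the sketch below. Two assumptions: it drops the single-use named volume (Cloud Build rejects a volume used by only one step, which is exactly the validation error above, and the default /workspace mount already contains your checkout), and it passes the environment variables from the old docker run command via env:
steps:
- name: 'gcr.io/myproject/automation:master'
  entrypoint: 'bash'
  args: ['/building/buildImages.sh', 'myapp']
  env:
  - 'BRANCH=master'
  - 'PROJECT=myproject'
timeout: 4000s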

echo to file in docker file fails when building from armhf/ubuntu in dind

I am using CI pipelines on GitLab to build Docker images for deployment to Raspbian. Since my builds need to access some private NPM packages, I include in the Dockerfile the following line, which creates a token file using the value stored in the environment variable $NPM_TOKEN:
RUN echo //registry.npmjs.org/:_authToken=$NPM_TOKEN > ~/.npmrc
This works fine when building from my usual image (resin/raspberrypi3-node). However one of my containers is built from armhf/ubuntu. When the above line is executed, the build fails with the following error:
standard_init_linux.go:207: exec user process caused "no such file or directory"
The command '/bin/sh -c echo //registry.npmjs.org/:_authToken=$NPM_TOKEN >> ~/.npmrc' returned a non-zero code: 1
The build runs fine from docker build on my development machine (Windows 10) but not within the gitlab pipeline.
I have tried stripping down my docker and pipeline files to the bare minimum, and removed the environment variable and the tilde from the path, and this still fails for the ubuntu (but not the resin) image.
Dockerfile.test.ubuntu:
FROM armhf/ubuntu
RUN echo hello > world.txt
Dockerfile.test.resin:
FROM resin/raspberrypi3-node
RUN echo hello > world.txt
gitlab-ci.yml:
build_image:
  image: docker:git
  services:
    - docker:dind
  script:
    - docker build -f Dockerfile.test.resin .   # Succeeds
    - docker build -f Dockerfile.test.ubuntu .  # Fails
  only:
    - master
I have searched for similar issues and have seen this error reported when running a .sh file which contained CRLF combinations. Although I am developing on Windows, my IDE (VS Code) is set up to use LF, not CRLF and I have checked all the above files for compliance.
As in here, try and use double-quotes for your echo argument:
RUN echo "//registry.npmjs.org/:_authToken=$NPM_TOKEN" > ~/.npmrc
And first, in your Dockerfile, do a RUN ls -alrth ~/ to check the accessibility/presence of the target folder.
That error was also reported in this thread (without any answer), with an example where the final version of the Dockerfile, as seen here, uses this .gitlab-ci.yml. The underlying issue is that the armhf image's ARM binaries cannot run on the shared runner's x86 host unless QEMU binfmt handlers are registered, which is what the hypriot/qemu-register container below does.
The OP bighairdave confirms in the comments:
I copied the following from the example @VonC gave, and it worked:
variables:
  DOCKER_HOST: "tcp://docker:2375"
  DOCKER_DRIVER: overlay2

before_script:
  - docker run --rm --privileged hypriot/qemu-register
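As a side note, a commonly seen alternative for registering the QEMU handlers (a sketch on my part, not something the OP tested here) uses the multiarch image instead:
before_script:
  # Register/refresh binfmt handlers so the x86 host can execute ARM binaries
  - docker run --rm --privileged multiarch/qemu-user-static --reset -p yes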

Docker: permission denied while trying to connect to Docker Daemon with local CircleCI build

I have a very simple config.yml:
version: 2
jobs:
  build:
    working_directory: ~/app
    docker:
      - image: circleci/node:8.4.0
    steps:
      - checkout
      - run: node -e "console.log('Hello from NodeJS ' + process.version + '\!')"
      - run: yarn
      - setup_remote_docker
      - run: docker build .
All it does: boot a Node image, test that Node is running, do a yarn install, and a docker build.
My Dockerfile is nothing special; it has a COPY and an ENTRYPOINT.
When I run circleci build on my MacBook Air using Docker Native, I get the following error:
Got permission denied while trying to connect to the Docker daemon socket at unix://[...]
If I change the docker build . command to: sudo docker build ., everything works as planned, locally, with circleci build.
However, pushing this change to CircleCI will result in an error: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
So, to summarize: using sudo works, locally, but not on CircleCI itself. Not using sudo works on CircleCI, but not locally.
Is this something the CircleCI staff has to fix, or is there something I can do?
For reference, I have posted this question on the CircleCI forums as well.
I've created a workaround for myself.
In the very first step of the config.yml, I run this command:
if [[ $CIRCLE_SHELL_ENV == *"localbuild"* ]]; then
  echo "This is a local build. Enabling sudo for docker"
  echo sudo > ~/sudo
else
  echo "This is not a local build. Disabling sudo for docker"
  touch ~/sudo
fi
Afterwards, you can do this:
eval `cat ~/sudo` docker build .
Explanation:
The first snippet checks if the CircleCI-provided environment variable CIRCLE_SHELL_ENV contains localbuild. This is only true when running circleci build on your local machine.
If true, it creates a file called sudo with contents sudo in the home directory.
If false, it creates a file called sudo with NO contents in the home directory.
The second snippet reads the ~/sudo file and prepends its contents to the arguments you give afterwards. If the ~/sudo file contains "sudo", the command in this example becomes sudo docker build .; if it contains nothing, it becomes docker build . (with a leading space, which is ignored).
This way, both the local (circleci build) builds and remote builds will work.
To iterate on the answer of Jeff Huijsmans,
an alternative version is to use a Bash variable for docker:
- run:
    name: Set up docker
    command: |
      if [[ $CIRCLE_SHELL_ENV == *"localbuild"* ]]; then
        echo "export docker='sudo docker'" >> $BASH_ENV
      else
        echo "export docker='docker'" >> $BASH_ENV
      fi
Then you can use it in your config
- run:
    name: Verify docker
    command: $docker --version
You can see this in action in my test for my Dotfiles repository.
Documentation about environment variables in CircleCI.
You might also solve your issue by running the docker image as root. Specify user: root under the image parameter:
...
jobs:
  build:
    working_directory: ~/app
    docker:
      - image: circleci/node:8.4.0
        user: root
    steps:
      - checkout
      ...
...
