I have a custom GitLab CI pipeline in which I want to compile a Golang app and build a Docker image. I decided to use the alpine Docker image for the GitLab runner, but I can't seem to get Docker started. If I try to start Docker manually I get an error (* WARNING: docker is already starting), and if I don't start the docker service manually I get: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? Has anyone else experienced this?
This should not be a duplicate question: the GitLab runner runs the alpine container as root (verified by running whoami). For the sake of trying, I did run usermod -aG docker $(whoami) and got the same output.
.gitlab-ci.yml
image: alpine

variables:
  GO_PROJECT: linkscout

before_script:
  - apk add --update go git libc-dev docker openrc
  - mkdir -p ~/go/src/${GO_PROJECT}
  - cp -r ${CI_PROJECT_DIR}/* ~/go/src/${GO_PROJECT}/
  - cd ~/go/src/${GO_PROJECT}
  - service docker start # * WARNING: docker is already starting

stages:
  - compile
  - build

compile:
  stage: compile
  script:
    - go get
    - go build -a

build:
  stage: build
  script:
    - docker version # If I don't run (service docker start) I get this message: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
By default you cannot use Docker-in-Docker. You need to configure your runner to allow it. Then, as stated in that explanation, use docker:latest as the image instead of alpine.
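For reference, a hedged sketch of registering a runner with Docker-in-Docker enabled, following the pattern in the GitLab documentation (the URL, token, and description are placeholders, not values from the original answer):

# --docker-privileged is what allows the docker:dind service to start in jobs
gitlab-runner register -n \
  --url "https://gitlab.com/" \
  --registration-token REGISTRATION_TOKEN \
  --executor docker \
  --description "docker-in-docker runner" \
  --docker-image "docker:latest" \
  --docker-privileged \
  --docker-volumes "/certs/client"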
I'm trying to write a Dockerfile that creates and registers a new runner against a private GitLab repository. According to the GitLab documentation, I wrote the following Dockerfile:
FROM gitlab/gitlab-runner:latest
RUN gitlab-runner register \
    --non-interactive \
    --url "https://gitlab.com/" \
    --registration-token "GITLAB_REPO_TOKEN" \
    --executor "docker" \
    --docker-image alpine:latest \
    --description "docker-runner" \
    --maintenance-note "Free-form maintainer notes about this runner" \
    --run-untagged="true" \
    --locked="false"
Then build it with:
docker build -t test .
And then run it in a container via:
docker run test:latest
The runner is correctly seen by GitLab (it is listed under Settings > CI/CD > Runners).
Then, I set up the following CI, for testing:
image: python:3.7-alpine

testci:
  stage: test
  script:
    - python test.py
The job is then pulled by the runner, but I immediately get the following error:
Running with gitlab-runner 15.8.2 (4d1ca121)
on docker-runner yVa1JDny, system ID: xxxxxxxxx
Preparing the "docker" executor
00:09
ERROR: Failed to remove network for build
ERROR: Preparation failed: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? (docker.go:753:0s)
Can anyone help with this? I don't understand what is missing from my configuration.
I've tried modifying the docker run call using the volume mount guide found here, but nothing changes.
I've also found a similar Dockerfile here, but it uses gitlab-ci-multi-runner, which is not the desired service.
You're attempting to use the docker executor for your runner, but your runner doesn't have access to the Docker socket, so it cannot create new containers. Your runner manager (what your Dockerfile is creating) is attempting to start up new Docker containers to handle each of your jobs, but it fails to connect to Docker.
In your docker run command, you will need to do a couple things:
Set your container to use privileged mode: --privileged
Map in the docker socket: -v /var/run/docker.sock:/var/run/docker.sock
With those two things, you can likely connect to the docker daemon and start new containers. If you want to perform docker builds within CI using this runner, note you'll need to configure your runner manager (again, what your docker file is creating) to allow these same two settings on the build containers. You can get information about how to do that here: https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#use-docker-socket-binding
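Putting those together, a hedged sketch of the docker run command, assuming the image built from the Dockerfile above is tagged test:latest as in the question:

# run the runner manager privileged and with the host docker socket mapped in
docker run -d --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  test:latest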
I want to build and test my app using a Dockerfile located in another private repository.
For that I'm using the official Alpine Docker image, in which I run a bash script that clones my private repo and runs docker to build the image. This is what my .gitlab-ci.yml looks like:
image: alpine:3.15

stages:
  - main

main-job:
  stage: main
  script:
    - apk add --update docker openrc
    - rc-update add docker boot
    - apk add bash git curl
    - bash build.sh $GH_TOKEN $REPO
And I have a simple script in build.sh:
git clone https://${GH_TOKEN}@github.com/${REPO} source
cd source || exit 1
docker container prune --force || true
docker build . --rm --force-rm --compress --no-cache=true --pull --file Dockerfile -t test-app
docker image ls
docker run --privileged --rm -i test-app
But Docker doesn't start and spams errors:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
See 'docker run --help'.
I also tried other commands in an Ubuntu-based image, such as service start docker, dockerd, service restart docker, and others.
But nothing seems to work, as I guess we can't run Docker inside Docker, or something like that.
Is there any alternative approach?
It looks like you don't have a Docker daemon running. You can use the Docker-in-Docker service by adding the following:
services:
  - docker:dind
See the GitLab-ci docs on building docker images for more info: https://docs.gitlab.com/ee/ci/docker/using_docker_build.html
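For instance, a minimal sketch of the question's main-job using the dind service; this assumes the runner's Docker executor is registered with privileged mode enabled, and uses the TLS-disabled variables from the GitLab docs:

main-job:
  stage: main
  image: docker:latest
  services:
    - docker:dind
  variables:
    # with TLS disabled, the dind service listens on tcp://docker:2375
    DOCKER_HOST: "tcp://docker:2375"
    DOCKER_TLS_CERTDIR: ""
  script:
    - apk add --no-cache bash git curl
    - bash build.sh $GH_TOKEN $REPO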
I've met the same issue. Maybe you have to add the gitlab-runner user to the docker group on your host:
sudo usermod -aG docker gitlab-runner
I'd suggest you build your image and push it to Docker Hub. Then you can start the container referencing your prebuilt image.
I am trying to configure a GitLab CI/CD runner. On the runner I have deployed Maven and Java, which build my project and execute the tests. So far so good, but the final step, which should package the code as a Docker image and deploy it, fails. Here is the script; it runs fine on the cloud runners, but on my local runner it says docker: command not found, and I did not understand the workflow. For that step to run, am I supposed to install Docker on my runner? The runner itself is a container inside Docker. I could not figure out what I should do for this step to run. Please help.
docker-build:
  stage: package
  script:
    - docker build -t registry.gitlab.com/imran_yusubov/gs-spring-boot-docker .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker push registry.gitlab.com/imran_yusubov/gs-spring-boot-docker
How are you starting the runner?
The proper way to start the runner would be:
docker run -d --name gitlab-runner --restart always \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:latest
This passes your Docker socket into the runner container. Then, in your pipeline, you would have to call the docker:dind service in order to run Docker-in-Docker, which allows you to build Docker images and run containers.
You could find more info in this tutorial
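As a concrete sketch of the package job from the question using the docker:dind service, assuming the runner is registered with privileged mode so the service can start:

docker-build:
  stage: package
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker build -t registry.gitlab.com/imran_yusubov/gs-spring-boot-docker .
    - docker push registry.gitlab.com/imran_yusubov/gs-spring-boot-docker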
I have a node-based project, and the following are the first few steps that need to be executed as part of the build:
npm install
npm run build
docker build -t client .
The last command above builds the following Dockerfile:
FROM docker.artifactory.abc.net/nginx
COPY build /usr/share/nginx/html
COPY default.conf /etc/nginx/conf.d/default.conf
Content of .gitlab-ci.yml:
image: docker.artifactory.abc.net/docker/node:1.0

stages:
  - build
  - deploy

build:
  stage: build
  script:
    - npm install
    - npm run build
    - docker build -t client .
In the above .gitlab-ci.yml I am using a custom node image (node:1.0) which contains the proxy settings needed for apk to work and the Artifactory configuration, so all dependencies are fetched through Artifactory. When I ran this build, I got a docker: command not found error while executing the last command (docker build -t client .), which is expected because the base image is for node and doesn't contain docker. So I added the docker setup instructions to the node Dockerfile based on this link, except for the last 3 lines, which configure the ENTRYPOINT and CMD.
Now when I ran the build, I got:
$ docker build -t client .
Sending build context to Docker daemon 372.7MB
Step 1 : FROM docker.artifactory.abc.net/nginx
Get https://docker.artifactory.abc.net/v2/nginx/manifests/latest: unknown: Authentication is required
ERROR: Job failed: exit code 1
This error, in my past experience, had to do with the docker login command. Since the docker setup in the official image uses tar (static binaries), I had to add a docker group to /etc/group and then add the current user (root) to that group. I also added the docker login command to the Dockerfile, as shown below:
addgroup docker; \
adduser root docker; \
docker login docker.artifactory.abc.net -u svc-art -p "ZTg6#&kq"; \
After that, if I try building this Dockerfile, I get the following error:
+ dockerd -v
Docker version 17.05.0-ce, build v17.05.0-ce
+ docker -v
Docker version 17.05.0-ce, build v17.05.0-ce
+ adduser root docker
+ tail -2 /etc/group
node:x:1000:node
docker:x:101:root
+ docker login docker.artifactory.abc.net -u svc-art -p ZTg6#&kq
Warning: failed to get default registry endpoint from daemon (Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?). Using system default: https://index.docker.io/v1/
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I also ran ls -ltr /var/run/docker.sock, and the docker socket file was not present inside the image. This seems to be the issue.
Any idea how I can get this working?
Well, from the example you have provided I cannot see where you call your docker service, so I assume you are not calling it; you are also not logging into the registry.
Your pipeline should look something like this:
image: docker.artifactory.abc.net/docker/node:1.0

stages:
  - build
  - deploy

build:
  image: docker:latest
  services:
    - docker:dind
  stage: build
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.example.com
    - docker build -t registry.example.com/group/project/image:latest .
    - docker push registry.example.com/group/project/image:latest
You could also find more info here
I'm getting the error docker: command not found while running the following CI script in gitlab-ci. The error happens during the before_script of the deploy stage.
services:
  - docker:dind

stages:
  - build
  - test
  - deploy

before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY

build:
  stage: build
  image: docker:latest
  script:
    - docker info
    - docker version
    - docker build --pull -t $SERVICE_NAME:$CI_COMMIT_REF_NAME .
    - docker image list
    - docker tag $SERVICE_NAME:$CI_COMMIT_REF_NAME $CI_REGISTRY_IMAGE/$SERVICE_NAME:$CI_COMMIT_REF_NAME
    - docker push $CI_REGISTRY_IMAGE/$SERVICE_NAME:$CI_COMMIT_REF_NAME

test:
  image: docker:latest
  stage: test
  script:
    - docker pull $CI_REGISTRY_IMAGE/$SERVICE_NAME:$CI_COMMIT_REF_NAME
    - docker image list
    - docker run $CI_REGISTRY_IMAGE/$SERVICE_NAME:$CI_COMMIT_REF_NAME npm test

deploy:
  image: google/cloud-sdk
  stage: deploy
  environment: Production
  script:
    - echo $DEPLOY_KEY_FILE_PRODUCTION > /tmp/GCLOUD_KEYFILE.json
    - gcloud auth activate-service-account --key-file /tmp/GCLOUD_KEYFILE.json
    - rm /tmp/GCLOUD_KEYFILE.json
    - gcloud info
    - gcloud components list
  only:
    - master
I'm a bit confused, because I'm running docker-in-docker (docker:dind) as a service, so the docker command should be available to all stages (if I understand this correctly); however, it clearly is not.
Is it due to an interaction with google/cloud-sdk ?
You probably misunderstood what services means. From the docs:
The services keyword defines just another docker image that is run during your job and is linked to the docker image that the image keyword defines.
What you need is a custom docker executor that uses dind image and preinstalled with gcloud sdk. You can build such an image with this Dockerfile:
FROM docker:latest
RUN apk add --no-cache \
    bash \
    build-base \
    curl \
    git \
    libffi-dev \
    openssh \
    openssl-dev \
    python \
    py-pip \
    python-dev
RUN pip install docker-compose fabric
RUN curl https://sdk.cloud.google.com | bash -s -- --disable-prompts
The question was asked almost 5 years ago, and I am unsure whether at that time the google/cloud-sdk image shipped without the docker binaries; I can't think of any other reason for a docker: command not found error than the binary not being available in the standard location. Anyway, today (2022) google/cloud-sdk comes with docker, and it can interact with the docker service. Since I ended up here several times after running into problems trying to use docker:dind with google/cloud-sdk, I will add the following:
It is possible to use docker from the google/cloud-sdk image; there is no need to create a custom image for your GitLab CI. The problem is that docker in google/cloud-sdk tries to connect to the socket at /var/run/docker.sock, as shown in the logs:
$ docker build -t gcr.io/$GCP_PROJECT_ID/test:$CI_COMMIT_SHORT_SHA .
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Anyway, you can also check in the logs of the docker:dind service that docker listens on a socket (not reachable from the job container) and on a TCP port (reachable via the hostname docker). So you just need to use the TCP port in your docker commands, either by setting the env variable DOCKER_HOST or by adding -H tcp://docker:2375, as in:
$ docker -H tcp://docker:2375 build -t gcr.io/$GCP_PROJECT_ID/test:$CI_COMMIT_SHORT_SHA .
Step 1/8 : FROM python:latest
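Equivalently, a hedged sketch of the deploy job that sets DOCKER_HOST once instead of passing -H to every command (this assumes the dind service is listening without TLS on port 2375, as in the answer above; the build command is the one from the log):

deploy:
  stage: deploy
  image: google/cloud-sdk
  services:
    - docker:dind
  variables:
    # point the docker CLI at the dind service's TCP endpoint
    DOCKER_HOST: "tcp://docker:2375"
    DOCKER_TLS_CERTDIR: ""
  script:
    - docker build -t gcr.io/$GCP_PROJECT_ID/test:$CI_COMMIT_SHORT_SHA .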
You forgot to declare the image at the top:
image: docker:latest
services:
  - docker:dind
...
Works for me! :)
See: https://docs.gitlab.com/ce/ci/docker/using_docker_build.html