I am trying to run GitLab pipeline jobs locally in order to test and debug them.
Here is what I did:
Installed gitlab-runner on my local machine.
sudo gitlab-runner exec docker --docker-privileged --builds-dir /tmp/builds --docker-volumes /home/fox/Work/docker/core-application:/core-application Rspec
This gives:
Runtime platform arch=amd64 os=linux pid=632331 revision=8fa89735 version=13.6.0
Running with gitlab-runner 13.6.0 (8fa89735)
Preparing the "docker" executor
Using Docker executor with image docker:19.03.6 ...
Starting service docker:19.03.6-dind ...
Pulling docker image docker:19.03.6-dind ...
Using docker image sha256:a33335bfe8302f4d8a7688bc1fa539f2aba787ec724119be53adc4681702a3e7 for docker:19.03.6-dind with digest docker@sha256:a4f33d003b7ec9133c2a1ff61f4e80305b329c0fa8b753140b9ab2808f28328c ...
WARNING: Service docker:19.03.6-dind is already created. Ignoring.
Waiting for services to be up and running...
*** WARNING: Service runner--project-0-concurrent-0-aef5122f9d27e6f0-docker-0 probably didn't start properly.
Health check error:
service "runner--project-0-concurrent-0-aef5122f9d27e6f0-docker-0-wait-for-service" timeout
Health check container logs:
....
*********
Pulling docker image docker:19.03.6 ...
Using docker image sha256:6512892b576811235f68a6dcd5fbe10b387ac0ba3709aeaf80cd5cfcecb387c7 for docker:19.03.6 with digest docker@sha256:3eb67443c54436650bd4f1e97ddf9ab1797d75e15d685c791f6c6397edaa6d82 ...
Preparing environment
Running on runner--project-0-concurrent-0 via fox...
Getting source from Git repository
Fetching changes...
Initialized empty Git repository in /tmp/builds/project-0/.git/
Created fresh repository.
fatal: not a git repository: /home/fox/Work/docker/core-application/../.git/modules/core-application
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
ERROR: Failed to cleanup volumes
ERROR: Job failed: exit code 1
FATAL: exit code 1
Then I tried to do it with a gitlab-runner image on the local machine:
docker run --name=runner --privileged -t --rm -v /var/run/docker.sock:/var/run/docker.sock -v /tmp/.gitlab-runner/:/etc/gitlab-runner -v ${PWD}:${PWD} --workdir $PWD gitlab/gitlab-runner exec docker --builds-dir /tmp/builds/ Rspec
I get:
Runtime platform arch=amd64 os=linux pid=7 revision=8fa89735 version=13.6.0
fatal: not a git repository: /home/fox/Work/docker/core-application/../.git/modules/core-application
WARNING: You most probably have uncommitted changes.
WARNING: These changes will not be tested.
fatal: not a git repository: /home/fox/Work/docker/core-application/../.git/modules/core-application
FATAL: exit status 128
Here is what the gitlab documentation says:
If you want to use the docker executor with the exec command, use that
in context of docker-machine shell or boot2docker shell. This is
required to properly map your local directory to the directory inside
the Docker container.
There are no examples. I googled it, but found nothing useful about docker-machine and gitlab-runner.
Can someone tell me how to do it correctly? Any sample?
Thanks.
You have to execute the command from the root folder of your git repository/project:
$ ls -a
./
../
.git/
.gitignore
.gitlab-ci.yml
Makefile
$ gitlab-runner exec docker <job_name>
You are also invoking gitlab-runner through docker run, which you shouldn't, as gitlab-runner already uses Docker under the hood.
See $ gitlab-runner exec docker --help for more info on the command.
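For the Rspec job from the question, a minimal sketch (assuming the repository lives at the path used above) would be:
cd /home/fox/Work/docker/core-application
sudo gitlab-runner exec docker Rspec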
Related
We are running a Kubernetes cluster for building Jenkins jobs. For the pods we are using the odavid/jenkins-jnlp-slave JNLP docker image. I mounted /var/run/docker.sock into the pod container and added the jenkins (uid=1000) user to the docker group on the host systems.
When running a shell script job in Jenkins with e.g. docker ps, it fails with the error docker: not found.
$ /bin/sh -xe /tmp/jenkins6501091583256440803.sh
+ id
uid=1000(jenkins) gid=1000(jenkins) groups=1000(jenkins)
+ docker ps
/tmp/jenkins2079497433467634278.sh: 8: /tmp/jenkins2079497433467634278.sh: docker: not found
Build step 'Execute shell' marked build as failure
Finished: FAILURE
The interesting thing is that when connecting to the pod manually and executing docker commands directly in the container as the jenkins user, it works:
kubectl exec -it jenkins-worker-XXX -- /bin/bash
~$ su - jenkins
~$ id
uid=1000(jenkins) gid=1000(jenkins) groups=1000(jenkins),1000(jenkins)
~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS
What is Jenkins doing differently in its job? Same user, same container; the only difference is that groups=1000(jenkins),1000(jenkins) lists 1000(jenkins) twice when connecting manually. What am I missing?
/var/run/docker.sock is just the host socket that allows the docker client to run docker commands from the container.
What you are missing is the docker client in your container.
Download the docker client manually, place it on a persistent volume, and ensure that the docker client is in the system path. Also, ensure that the docker client is executable.
This command will do it for you. You may have to get the right version of the docker client for your environment:
curl -fsSLO https://get.docker.com/builds/Linux/x86_64/docker-17.03.1-ce.tgz &&
tar --strip-components=1 -xvzf docker-17.03.1-ce.tgz -C /usr/local/bin
You may even be able to install the docker client using the package manager for your image.
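For example, on a Debian/Ubuntu based image, a rough sketch would be (the docker.io package brings more than just the client, but it does include the CLI):
apt-get update && apt-get install -y docker.io
docker version   # should now reach the daemon through the mounted /var/run/docker.sock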
I have been working through The Docker Book and I am now learning about CI. I tried to run this script within the Execute shell step of my build:
# Build the image to be used for this job.
IMAGE=$(sudo docker build . | tail -1 | awk '{ print $NF }')
# Build the directory to be mounted into Docker.
MNT="$WORKSPACE/.."
# Execute the build inside Docker.
CONTAINER=$(sudo docker run -d -v $MNT:/opt/project/ $IMAGE /bin/ bash -c 'cd /opt/project/workspace; rake spec')
# Attach to the container so that we can see the output.
sudo docker attach $CONTAINER
# Get its exit code as soon as the container stops.
RC=$(sudo docker wait $CONTAINER)
# Delete the container we've just used.
sudo docker rm $CONTAINER
# Exit with the same value as that with which the process exited.
exit $RC
Running this script ends with the build failing. It shows these two errors:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
and
sudo docker run -d -v /private/var/jenkins_home/jobs/${Docker_test_job}/workspace/..:/opt/project/ /bin/ bash -c cd /opt/project/workspace; rake spec
docker: invalid reference format.
See 'docker run --help'.
+ CONTAINER=
Build step 'Execute shell' marked build as failure
Recording test results
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files were found. Configuration error?
Finished: FAILURE
I don't understand how to fix it, as I've been following the instructions in the book. I tried using $PWD to fix the issue, but that didn't work either.
Actually, the jenkins user does not have permission to run docker commands. To fix this, add your jenkins user to the docker group:
sudo usermod -aG docker jenkins
Then restart your jenkins server to refresh the group.
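For example, assuming Jenkins runs as a systemd service on the host (adjust for your init system):
sudo systemctl restart jenkins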
Please be aware of the warning: "The docker group grants privileges equivalent to the root user." See the Docker documentation for details on how this impacts security in your system.
I'm new to CI/CD and I'm trying to build and push my app to Docker Hub with CircleCI.
I researched and tried some things, without success.
I'm getting this error:
#!/bin/bash -eo pipefail
sudo docker login -u $DOCKER_LOGIN -p $DOCKER_PASSWORD
sudo docker tag $HUB_NAME $DOCKER_LOGIN/$HUB_NAME
sudo docker push $DOCKER_LOGIN/$HUB_NAME
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Exited with code 1
My config.yml where I am having trouble:
# run tests!
- run: mvn integration-test
- setup_remote_docker
- run:
name: Build and deploy docker images
command: |
docker build -t $HUB_NAME:latest .
- deploy:
name: Push application Docker image
command: |
sudo docker login -u $DOCKER_LOGIN -p $DOCKER_PASSWORD
sudo docker tag $HUB_NAME $DOCKER_LOGIN/$HUB_NAME
sudo docker push $DOCKER_LOGIN/$HUB_NAME
It seems to me you should not be using sudo docker in your login, tag and push commands.
Just use docker login, docker tag and docker push without sudo and you should be good to go.
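For instance, the deploy step from your config would then run the same commands, just without sudo:
docker login -u $DOCKER_LOGIN -p $DOCKER_PASSWORD
docker tag $HUB_NAME $DOCKER_LOGIN/$HUB_NAME
docker push $DOCKER_LOGIN/$HUB_NAME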
Explanation
The whole point of the setup_remote_docker step, which you are using in your configuration, is to set the environment variables that allow the docker command to access a remote docker environment with the current docker user.
In your pipeline output, if you open the step with the label Setup a remote Docker engine, you'll likely see an output like:
Allocating a remote Docker Engine
[ ... skip some output ...]
Remote Docker engine created. Using VM '...'
Created container accessible with:
DOCKER_CERT_PATH=/tmp/docker-certs(...)
DOCKER_HOST=tcp://XXX.XXX.XXX.XXX:YYYY
DOCKER_MACHINE_NAME=ZZZZ
DOCKER_TLS_VERIFY=1
NO_PROXY=127.0.0.1,localhost,circleci-internal-outer-build-agent,XXX.XXX.XXX.XXX:YYYY
[ ... some more output ...]
If you run docker via sudo, those environment variables are not carried over, and the docker command will attempt to connect to the standard docker unix socket on the local machine. That is why you see:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
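A quick way to see this inside the job (illustrative sketch; sudo resets the environment by default):
env | grep DOCKER_        # DOCKER_HOST, DOCKER_CERT_PATH, ... set by setup_remote_docker
sudo env | grep DOCKER_   # typically prints nothing, so docker falls back to the local socket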
Check the Building Docker Images documentation to see that they don't use sudo anywhere.
You probably copied those sudo commands from your own environment where your local machine restricts access to the docker unix socket.
I'm unable to run the eclipse/che image built locally, i.e., from the eclipse/che source code on my PC.
Here are the steps that I tried:
Clone the eclipse/che source code into //d/checmd3/che.
git clone https://github.com/eclipse/che.git &&
cd che && git checkout tags/7.0.0-beta-2.0
Build it
cd assembly/assembly-main
mvn clean install
...A new assembly is placed in:
che/assembly/assembly-main/target/eclipse-che-7.0.0-beta-2.0/eclipse-che-7.0.0-beta-2.0
Run it in docker
docker run -it --rm \
  -v //var/run/docker.sock://var/run/docker.sock \
  -v //d/checmd3/che/assembly/assembly-main/target/eclipse-che-7.0.0-beta-2.0/eclipse-che-7.0.0-beta-2.0:/che \
  -e CHE_ASSEMBLY=//d/checmd3/che/assembly/assembly-main/target/eclipse-che-7.0.0-beta-2.0/eclipse-che-7.0.0-beta-2.0 \
  -v //d/checmd3/che/tmp:/data \
  eclipse/che start
After step #3 above, the following message was shown:
Unable to find image 'eclipse/che:7.0.0-beta-2.0' locally
7.0.0-beta-2.0: Pulling from eclipse/che
I believe that docker is not trying to run the image from my local PC?
I'm not sure if step #3 above is the issue or not. Please help me run the image from the source code cloned on my PC.
(reference: https://github.com/eclipse/che/wiki/Development-Workflow)
I'm unfamiliar with Eclipse Che, but it appears you can simply run their image(s) on your machine, assuming you have Docker installed.
Start by creating a local data directory, perhaps:
mkdir -p ${PWD}/che/data
Then:
docker run \
--interactive \
--tty \
--rm \
--net=host \
--volume=/var/run/docker.sock:/var/run/docker.sock \
--volume=${PWD}/che/data:/data \
eclipse/che:nightly start
https://www.eclipse.org/che/docs/che-6/docker-single-user.html
You may not need the --net=host flag.
You should then be able to access the tool:
http://localhost:8080
NB
Your steps 1 & 2 (git clone... and mvn clean install) are probably redundant. These are likely the commands to build the Docker image, but since the image already exists on Docker Hub, you need not follow these steps.
Try this docker command:
docker run -it --rm \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /d/checmd3/che/assembly/assembly-main/target/eclipse-che-7.0.0-beta-2.0/eclipse-che-7.0.0-beta-2.0:/che \
-e CHE_ASSEMBLY='/d/checmd3/che/assembly/assembly-main/target/eclipse-che-7.0.0-beta-2.0/eclipse-che-7.0.0-beta-2.0' \
-v /d/checmd3/che/tmp:/data \
eclipse/che start
The above command is working; here is the output:
INFO: (che init): CHE_VERSION=7.0.0-beta-2.0
INFO: (che init): CHE_CONFIG=/d/checmd3/che/tmp
INFO: (che init): CHE_INSTANCE=/d/checmd3/che/tmp/instance
INFO: (che config): Generating che configuration...
INFO: (che config): Customizing docker-compose for running in a container
INFO: (che start): Preflight checks
mem (1.5 GiB): [OK]
disk (100 MB): [OK]
port 8080 (http): [AVAILABLE]
conn (browser => ws): [OK]
conn (server => ws): [OK]
INFO: (che start): Starting containers...
INFO: (che start): Services booting...
INFO: (che start): Server logs at "docker logs -f che"
INFO: (che start): Booted and reachable
INFO: (che start): Ver: 7.0.0-beta-2.0
INFO: (che start): Use: http://172.26.10.112:8080
INFO: (che start): API: http://172.26.10.112:8080/swagger
Thank you. I tried the following docker command:
docker run -it --rm \
  -v //var/run/docker.sock://var/run/docker.sock \
  -v //d/checmd3/che/assembly/assembly-main/target/eclipse-che-7.0.0-beta-2.0/eclipse-che-7.0.0-beta-2.0:/che \
  -e CHE_ASSEMBLY='//d/checmd3/che/assembly/assembly-main/target/eclipse-che-7.0.0-beta-2.0/eclipse-che-7.0.0-beta-2.0' \
  -v //d/checmd3/che/tmp:/data \
  eclipse/che start
but, still the message shown is:
Unable to find image 'eclipse/che:latest' locally
latest: Pulling from eclipse/che
(docker is still not using the source code built locally on my PC)
If you want to run your custom Che binaries, the syntax you use is the correct one. The Che CLI will pull the default image anyway, but your binaries will be mounted into the container. Will that work for you?
If you want to run your own image for some reason, you may simply pass the following environment variable to the CLI:
-e IMAGE_CHE=myRegistry/myRepo:myTag
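Putting that together with the command from earlier in this thread, a rough sketch (myRegistry/myRepo:myTag is a placeholder for wherever your locally built image is tagged; adapt the mounts from your earlier command as needed):
docker run -it --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /d/checmd3/che/tmp:/data \
  -e IMAGE_CHE=myRegistry/myRepo:myTag \
  eclipse/che start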
You could first try to prevent Eclipse Che from pulling the image from Docker Hub by setting CHE_DOCKER_ALWAYS__PULL__IMAGE=false in your che.env config file.
If that doesn't help, then I think you need to install and run a local Docker registry, and push the Eclipse Che image that you have built locally to that registry:
docker run -d -p 5000:5000 --name registry registry:2
docker image tag che:7.0.0-beta-2.0 localhost:5000/eclipse/che:7.0.0-beta-2.0
docker push localhost:5000/eclipse/che:7.0.0-beta-2.0
Then you can pull and run your image with your docker run command, referencing the localhost:5000/eclipse/che:7.0.0-beta-2.0 tag.
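For example, a sketch reusing the docker run command from earlier in this thread, pointed at the local-registry tag:
docker pull localhost:5000/eclipse/che:7.0.0-beta-2.0
docker run -it --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /d/checmd3/che/tmp:/data \
  localhost:5000/eclipse/che:7.0.0-beta-2.0 start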
You can stop the registry by:
docker container stop registry && docker container rm -v registry
There was an error when I ran the command you suggested:
$ docker run --interactive --tty --rm --net=host --volume=//var/run/docker.sock://var/run/docker.sock --volume=/${PWD}/che/data:/data eclipse/che:nightly start
The following is the log:
Unable to find image 'eclipse/che:latest' locally
latest: Pulling from eclipse/che
d6a5679aa3cf: Pull complete
cc87d3e420c3: Pull complete
afef80a99ec8: Pull complete
d4be2f254bed: Pull complete
3e449e5a7821: Pull complete
5b621c46cfe0: Pull complete
ecdf06277042: Pull complete
dcbe7590a8ca: Pull complete
Digest: sha256:bd853bd40a4fafe73153dda478f1191d3d29447f3d110584933a5fb22e8cb199
Status: Downloaded newer image for eclipse/che:latest
Error: No such image or container: linuxkit-00155d19290d
I didn't get the linuxkit error yesterday :-(
I created a minimal gitlab CI script to verify this error:
docker_execution_test:
image: debian:9
script:
- pwd
- ls
The output I would expect is this:
db@theia:~/git/docker_test (master*)$ docker run -it --rm debian:9 pwd
/
db@theia:~/git/docker_test (master*)$ docker run -it --rm debian:9 ls
bin dev home lib64 mnt proc run srv tmp var
boot etc lib media opt root sbin sys usr
However, the output when executed through gitlab-runner is this:
db@theia:~/git/docker_test (master)$ gitlab-runner exec docker docker_execution_test
Runtime platform arch=amd64 os=darwin pid=49585 revision=3afdaba6 version=11.5.0
WARNING: You most probably have uncommitted changes.
WARNING: These changes will not be tested.
Running with gitlab-runner 11.5.0 (3afdaba6)
Using Docker executor with image debian:9 ...
Pulling docker image debian:9 ...
Using docker image sha256:4879790bd60d439cfe39c063660eef7af525d5f6f1cbb701a14c7cfc11cbfcf7 for debian:9 ...
Running on runner--project-0-concurrent-0 via theia.local...
Cloning repository...
Cloning into '/builds/project-0'...
done.
Checking out bb973ec4 as master...
Skipping Git submodules setup
$ pwd
/builds/project-0
$ ls
README.md
Job succeeded
What the job is listing is the content of the special gitlab container that's used throughout the build. Why is the container not created? What am I missing here?
As it turns out, gitlab-runner was working as expected. What is quite confusing, though, is that it does some manipulation of the image it boots up.
The entrypoint is overridden, and the folder the repository is checked out into is mounted into the container, with the WORKDIR pointing to it.
So while it is possible to run your own images as containers, keep in mind that you might need to change directories before running any commands.
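A rough shell-level sketch of what the runner effectively does with the job above (paths simplified; this is illustrative, not the exact mechanics):
# The image's entrypoint is replaced with a shell, the checked-out repository ends up
# under /builds/project-0, and that directory becomes the working directory:
docker run --rm \
  -v /path/to/checked-out/repo:/builds/project-0 \
  -w /builds/project-0 \
  debian:9 sh -c 'pwd && ls'
# => /builds/project-0
# => README.md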