I'd like to set up continuous integration with Gitlab. My application is set up through a number of docker containers, which are put together using docker-compose. My .gitlab-ci.yml looks like:
image: "docker/compose:1.25.0-rc2-debian"
before_script:
- docker --version
- docker info
- docker-compose build
- ./bin/start-docker
rspec:
script:
- bundle exec rspec
rubocop:
script:
- bundle exec rubocop
When I push, it tries to run docker-compose build, which in turn fails to find the docker daemon. This is not completely surprising, because I haven't tried to start the docker daemon. But I would usually do that with systemctl start docker - this fails because the runner doesn't use systemd.
How can I get docker-compose to build?
Some notes: docker --version and docker-compose --version indicate that both docker and docker-compose are installed correctly. If I try docker info, I get the "cannot find docker daemon" error.
image: "docker/compose:1.25.0-rc2-debian" indicates that you are running your pipeline on docker runner. Try running it on shell runner with docker and docker-compose installed and docker daemon running.
Other way would be to rewrite your docker-compose to .gitlab-ci.yml with proper dependencies.
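If you prefer to stay on the Docker executor, a third option is Docker-in-Docker. A minimal sketch, assuming your runner is configured for privileged dind (the DOCKER_HOST and TLS values depend on the dind version and are assumptions here):

image: "docker/compose:1.25.0-rc2-debian"

services:
  - docker:dind

variables:
  DOCKER_HOST: tcp://docker:2375
  # Newer docker:dind images enable TLS by default; disabling it here is an assumption.
  DOCKER_TLS_CERTDIR: ""
  DOCKER_DRIVER: overlay2

before_script:
  - docker info          # should now reach the dind daemon
  - docker-compose build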
I want to run a Postman collection from a Docker image in a Gitlab CI pipeline. The Docker socket file is already mounted for the gitlab-ci-runner so the runner has access to the docker server.
Here's the job definition from .gitlab-ci.yaml
postman:
  image: docker:20.10.14
  stage: schedule
  only:
    - schedules
  before_script: []
  script:
    - env
    - docker run -t -v $CI_PROJECT_DIR:/etc/newman postman/newman run postman_collection.json
The console output of the gitlab CI runner looks like this:
$ docker run -t -v $CI_PROJECT_DIR:/etc/newman postman/newman run postman_collection.json
error: collection could not be loaded
unable to read data from file "postman_collection.json"
ENOENT: no such file or directory, open 'postman_collection.json'
The file exists. I even tried
docker run --rm -it -v $PWD:/etc/newman --entrypoint sh postman/newman
from my localhost console and ran it manually. It's all there. What am I missing?
The Docker socket file is already mounted for the gitlab-ci-runner
The problem here is that in the scenario where you are talking to the host Docker daemon (i.e., when the host Docker socket is mounted in the job), the /source/path part of a volume argument like -v /source/path:/container/path refers to the host filesystem, not the filesystem of the job container.
Think about it like this: the host docker daemon doesn't know that its socket is mounted inside the job container. So when you run docker commands this way, it's as if you're running the docker command on the runner host!
Because $PWD and $CI_PROJECT_DIR in your job command evaluate to a path in the job container (and this path isn't on the runner host filesystem), the volume mount will not work as expected.
This limitation is noted in the limitations of Docker socket binding documentation:
Sharing files and directories from the source repository into containers may not work as expected. Volume mounting is done in the context of the host machine, not the build container
The easiest workaround here would likely be to use postman/newman as your image: instead of docker.
myjob:
  image:
    name: postman/newman
    entrypoint: [""]
  script:
    - newman run postman_collection.json
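If you prefer to keep the docker image and the socket binding, one hedged alternative (an untested sketch; the container name newman_run is just for illustration) is to copy the collection into a created container with docker cp, which streams the file through the Docker client instead of resolving a path on the runner host:

postman:
  image: docker:20.10.14
  stage: schedule
  only:
    - schedules
  script:
    # /etc/newman is the path the question already targets with its bind mount
    - docker create --name newman_run -t postman/newman run /etc/newman/postman_collection.json
    - docker cp "$CI_PROJECT_DIR/postman_collection.json" newman_run:/etc/newman/
    - docker start -a newman_run
  after_script:
    - docker rm -f newman_run || true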
I need to add a CircleCI job: after pulling a Docker image (abc), I need to execute a docker run command against the container created from image abc to finish the job.
circleci_job:
  docker:
    - image: xyz.ecr.us-west-2.amazonaws.com/abc
  steps:
    - checkout
    - run:
        name: execute docker run command
        command: |
          export env1=https://example.com
          docker run abc --some command
I am getting the error below:
/bin/bash: line 1: docker: command not found
I wanted to know: am I using the wrong executor type, or am I missing something here?
I see two issues here.
You need to use an image that has the Docker client already installed or you need to install it on the fly in your job. Right now it appears that the image xyz.ecr.us-west-2.amazonaws.com/abc doesn't have Docker client installed.
With the Docker executor, in order for Docker commands such as docker run or docker pull to work, you need the special CircleCI step - setup_remote_docker to be run BEFORE you try using Docker.
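For example, a rough sketch combining both fixes (cimg/base is just one convenience image that ships the Docker CLI; the registry URL and command are copied from the question, and ECR authentication is assumed to happen elsewhere):

circleci_job:
  docker:
    - image: cimg/base:stable
  steps:
    - checkout
    - setup_remote_docker
    - run:
        name: execute docker run command
        command: |
          export env1=https://example.com
          # assumes you are already authenticated against the ECR registry
          docker pull xyz.ecr.us-west-2.amazonaws.com/abc
          docker run xyz.ecr.us-west-2.amazonaws.com/abc --some command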
I have a problem with GitLab CI/CD. I am trying to build an image and run it on the server where I have a runner. My gitlab-ci.yaml:
image: docker:latest

services:
  - docker:dind

variables:
  TEST_NAME: registry.gitlab.com/pawelcyrklaf/learn-devops:$CI_COMMIT_REF_NAME

stages:
  - build
  - deploy

before_script:
  - docker login -u pawelcyrklaf -p examplepass registry.gitlab.com

build_image:
  stage: build
  script:
    - docker build -t $TEST_NAME .
    - docker push $TEST_NAME

deploy_image:
  stage: deploy
  script:
    - docker pull $TEST_NAME
    - docker kill $(docker ps -q) || true
    - docker rm $(docker ps -a -q) || true
    - docker run -dt -p 8080:80 --name gitlab_learn $TEST_NAME
My Dockerfile
FROM centos:centos7
RUN yum install httpd -y
COPY index.html /var/www/html/
CMD ["/usr/sbin/httpd"," -D"," FOREGROUND"]
EXPOSE 80
The Docker image builds successfully and it is in the registry; the deploy is also successful, but when I execute docker ps, this image is not running.
I did all of this the same way as in this tutorial: https://www.youtube.com/watch?v=eeXfb05ysg4
What am I doing wrong?
The job is scheduled in a container, together with another service container which has Docker inside. It works and starts your container, but after the job finishes, the neighbouring service container with Docker stops too. You then check the host and see no container there.
Try to remove:
services:
  - docker:dind
Also, check out the predefined list of CI variables. You can avoid hardcoding credentials and the image path.
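For example, a hedged sketch assuming you push to GitLab's built-in container registry (for other registries, store credentials as masked CI/CD variables instead):

variables:
  TEST_NAME: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG

before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY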
P.S. You kill and rm all containers, so your CI will someday remove containers which are not managed by this repo...
when i execute docker ps, i don't have running this image
You didn't mention how you check for the running container, so I'll assume the following considerations.
Make sure you are physically checking on the right runner host.
Since you didn't set any tags on the jobs, the pipeline will pick the first available runner. You can see which runner executed the job on the job page.
Make sure your container has not stopped or finished.
To see all containers, use docker ps -a; it shows all containers, even stopped ones. There will be an exit code by which you can determine the reason. Debug it with docker logs {container_id} (put the container_id without braces).
Gitlab.com:
I'm not sure you can run a Docker application from within your GitLab CI. Try removing the -d option, which runs the container in the background, from your docker run command:
$ docker run -t -p 8080:80 --name gitlab_learn $TEST_NAME
If this does work, it will probably force the pipeline to never finish and it will drain your CI/CD minutes.
Self-hosted Gitlab:
Your GitLab CI is meant to run actions to build and deploy your application, so it doesn't make sense to have your application running on the same instance your GitLab CI runner does. Even if you want to run the app on the same instance, it shouldn't be running in the same container the runner uses; to achieve this you should configure the GitLab CI runner to use the Docker daemon on the host.
Anyway, I would strongly recommend deploying somewhere outside of where your GitLab runner is running, and even better to a managed Docker service such as Kubernetes or AWS ECS.
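For reference, a hedged sketch of a self-hosted runner's config.toml that lets jobs use the host Docker daemon via its socket (the name, image, and paths are assumptions; registration url and token are omitted):

[[runners]]
  name = "docker-socket-runner"
  executor = "docker"
  [runners.docker]
    image = "docker:latest"
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]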
You did not specify what your setup is, but based on the information in your question I can deduce that you're using gitlab.com (as opposed to a private GitLab instance) and a self-hosted runner with the Docker executor.
You cannot use a runner with Docker executor to deploy containers directly to the underlying Docker installation.
There are two ways to do this:
Use a runner with a shell executor as described in the YT tutorial you posted.
Prepare a helper image that will use SSH to connect to your server and run docker commands there, as sketched below.
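A hedged sketch of the second option (the SSH key variable, user, and host are assumptions you would define yourself):

deploy_image:
  stage: deploy
  image: alpine:latest
  before_script:
    - apk add --no-cache openssh-client
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | ssh-add -
  script:
    - ssh -o StrictHostKeyChecking=no user@your-server "docker pull $TEST_NAME && (docker rm -f gitlab_learn || true) && docker run -dt -p 8080:80 --name gitlab_learn $TEST_NAME"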
I'm experiencing a strange problem with a GitLab CI build.
My script looks like this:
- docker pull myrepo:myimage
- docker tag myrepo:myimage myimage
- docker run --name myimage myimage
It worked a few times, but afterwards I started getting errors:
docker: Error response from daemon: Conflict. The container name
"/myimage" is already in use by container ....
I logged on to the particular machine where that step was executed, and docker ps -a showed that the container was left behind on the build machine...
I expected GitLab CI build steps to be fully separated from the external environment by running them in Docker containers, so that a build would not 'spoil' the environment for other builds. So I expected all images and containers created by a CI build to simply perish... which is not the case...
Is my GitLab somehow misconfigured, or is it expected behaviour that Docker images/containers exist in the context of the host machine and not within the Docker image?
In my build, I use image docker:latest
No, your GitLab is not misconfigured. GitLab does clean its runners and executors (the Docker image your commands run in).
Since you are using DinD (Docker-in-Docker), any container you start or build is actually built on the same host and runs beside your job executor container, not 'inside' it.
Therefore you should clean up yourself; GitLab has no knowledge of what you do inside your job, so it's not responsible for it.
I run various pipelines with the same situation you described, so here are some suggestions:
job:
  script:
    - docker pull myrepo:myimage
    - docker tag myrepo:myimage myimage
    - docker run --name myimage myimage
  after_script:
    # Remove the container; if it is not running anymore (since it's not a run -d), ignore errors about that.
    - docker rm -f myimage || true
    # Remove the pulled and tagged images
    - docker rmi -f myrepo:myimage myimage
Also (and I don't know your exact job of course) this could be shorter:
job:
  script:
    # Only pull if you want to 'refresh' an image that would be left behind
    - docker pull myrepo:myimage
    - docker run --name myimage myrepo:myimage
  after_script:
    # Remove the container; if it is not running anymore (since it's not a run -d), ignore errors about that.
    - docker rm -f myimage || true
    # Remove the pulled image
    - docker rmi -f myrepo:myimage
The problem was that /var/run/docker.sock was mapped as a volume, which caused all docker commands to be invoked on the host, not inside the image. It's not a misconfiguration per se, but the alternative is to use the dind service: https://docs.gitlab.com/ce/ci/docker/using_docker_build.html#use-docker-in-docker-executor
It required three things to be done (a combined sketch follows the list):
Add the following section to the GitLab CI config:
services:
  - docker:dind
Define the variable DOCKER_DRIVER: overlay2, either in the CI config or globally in config.toml.
Load the kernel module overlay (/etc/modules).
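Putting the first two points together, the CI config ends up looking roughly like this (a sketch; image tags may differ in your setup):

image: docker:latest

services:
  - docker:dind

variables:
  DOCKER_DRIVER: overlay2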
I've run through the initial Overview of Docker Compose exactly as written and it works just fine locally with boot2docker. However, if I try to do a docker-compose up on a remote host, it does not add the code to the remote container.
To reproduce:
Run through the initial Overview of Docker Compose exactly as written.
Install Docker Machine and start a Dockerized VM on any cloud provider.
docker-machine create --driver my-favourite-cloud composetest
eval "$(docker-machine env composetest)"
Now that you're working with a remote host, run docker-compose up on the original code.
composetest $ docker-compose up
Redis runs fine but the Flask app does not.
composetest $ docker ps -a
CONTAINER ID        IMAGE               COMMAND                CREATED              STATUS                          PORTS      NAMES
794c90928b97        composetest_web     "/bin/sh -c 'python    About a minute ago   Exited (2) About a minute ago              composetest_web_1
2c70bd687dfc        redis               "/entrypoint.sh redi   About a minute ago   Up About a minute               6379/tcp   composetest_redis_1
What went wrong?
composetest $ docker logs 794c90928b97
python: can't open file 'app.py': [Errno 2] No such file or directory
Can we confirm it?
composetest $ docker-compose run -d web sleep 60
Starting composetest_redis_1...
composetest_web_run_3
composetest $ docker exec -it a4 /bin/bash
root@a4a73c6dd159:/code# ls -a
. ..
Nothing there. Can we fix it?
Comment out volumes in docker-compose.yml
web:
  build: .
  ports:
    - "5000:5000"
  # volumes:
  #   - .:/code
  links:
    - redis
redis:
  image: redis
Then just docker-compose up and it works!
Let's try again on boot2docker.
composetest $ eval "$(boot2docker shellinit)"
composetest $ docker-compose up
Recreating composetest_redis_1...
Recreating composetest_web_1...
Attaching to composetest_redis_1, composetest_web_1
...
The Flask app does work but it has a serious problem. If you change app.py, the Flask dev server doesn't reload and those changes aren't automatically seen. Even if you stop the container and docker-compose up again, the changes still aren't seen. I realize we lose this essential feature because the volume is no longer mounted. But not mounting the volume is the only way I've been able to get docker-compose to work with a remote host. We should be able to get both local and remote hosts to work using the same docker-compose.yml and Dockerfile.
How do I develop interactively with a local VM and deploy to a remote VM without having to change the Dockerfile or docker-compose.yml?
Versions:
Docker 1.7.0
Docker Compose 1.3.0
Docker Machine 0.3.0
Under the hood, Compose does pretty much the same thing you can do with the regular command-line interface. So your command is roughly equivalent to:
$ docker run --name web -p 5000:5000 -v $(pwd):/code --link redis:redis web
The issue is that the volume is relative to the docker host, not the client. So it will mount the working directory on the remote VM, not the client. In your case, this directory is empty.
If you want to develop interactively with a remote VM, you will have to check out the source and edit the files on the VM.
UPDATE: It seems that you actually want to develop and test locally, then deploy a production version to a remote VM. (Apologies if I still misunderstand.) To do this, I suggest you have a separate Compose file for development where you mount the local volume, then rebuild and deploy the image for production. By rebuilding the image, it will pick up the latest version of the code. Mounting a volume in production breaks things because you've hidden the code baked into the image behind an empty directory.
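For example, a hedged sketch of that split using the Compose v1 file format from the question (the file names and the -f usage are conventions, not the only way to do it):

# docker-compose.yml - production/remote: the code is baked into the image at build time
web:
  build: .
  ports:
    - "5000:5000"
  links:
    - redis
redis:
  image: redis

# docker-compose.dev.yml - local development: same services, plus the live code mount
# run locally with: docker-compose -f docker-compose.dev.yml up
web:
  build: .
  ports:
    - "5000:5000"
  volumes:
    - .:/code
  links:
    - redis
redis:
  image: redis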
It's also worth pointing out that Docker doesn't currently advise using Compose in production.
What I'm really looking for is found in the Using Compose in production doc. By extending services in Compose, you're able to develop interactively with a local VM and deploy to a remote VM without having to change the Dockerfile or docker-compose.yml.
I ran into this "can't open file 'app.py'" problem while following the Getting Started tutorials; for me it was because I'm running Docker on Windows. I needed to make sure that I'd shared the drive containing my project directory in Docker settings.
Source: see the "Shared Drive" section of Docker Settings in the docs