GitLab CI/CD - deploy images - docker

I have a problem with GitLab CI/CD. I am trying to build an image and run it on the server where I have a runner. My gitlab-ci.yaml:
image: docker:latest

services:
  - docker:dind

variables:
  TEST_NAME: registry.gitlab.com/pawelcyrklaf/learn-devops:$CI_COMMIT_REF_NAME

stages:
  - build
  - deploy

before_script:
  - docker login -u pawelcyrklaf -p examplepass registry.gitlab.com

build_image:
  stage: build
  script:
    - docker build -t $TEST_NAME .
    - docker push $TEST_NAME

deploy_image:
  stage: deploy
  script:
    - docker pull $TEST_NAME
    - docker kill $(docker ps -q) || true
    - docker rm $(docker ps -a -q) || true
    - docker run -dt -p 8080:80 --name gitlab_learn $TEST_NAME
My Dockerfile
FROM centos:centos7
RUN yum install httpd -y
COPY index.html /var/www/html/
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
EXPOSE 80
The Docker image builds successfully and is in the registry, and the deploy job also succeeds, but when I execute docker ps, the container is not running.
I did everything the same as in this tutorial: https://www.youtube.com/watch?v=eeXfb05ysg4
What am I doing wrong?

The job is scheduled in a container together with another service container that has Docker inside it. It works and it does start your container, but after the job finishes, the neighbouring service container with Docker stops too. Then you check the host and see no container there.
Try to remove:
services:
  - docker:dind
Also, check out the list of predefined CI variables. You can avoid hard-coding the credentials and image path.
P.S. You kill and rm all containers, so your CI will someday remove containers that are not managed by this repo...
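For example, a rough sketch using the predefined variables (these names are GitLab built-ins; adapt to your project):

variables:
  TEST_NAME: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME

before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY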

when I execute docker ps, the container is not running
You didn't mention how you check for the running container, so consider the following:
Make sure you are physically checking on the right runner. Since you didn't set any tags on the jobs, the pipeline will pick the first available runner; you can see which runner executed the job on the job page.
Make sure your container is not stopped or finished. To see all containers, use docker ps -a — it shows all containers, even stopped ones. The exit code shown there can help you determine the reason; debug further with docker logs {container_id} (put the container_id without braces).
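For example (the container id is a placeholder):

# list all containers, including exited ones, together with their status/exit code
docker ps -a
# see why a specific container stopped
docker logs <container_id>
docker inspect --format '{{.State.ExitCode}}' <container_id>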

GitLab.com:
I'm not sure you can run a Docker application this way in your GitLab CI; try removing the -d option from your docker run command, since that option runs the container in the background:
$ docker run -t -p 8080:80 --name gitlab_learn $TEST_NAME
If this does work, it will probably cause the pipeline to never finish, and it will drain your CI/CD minutes.
Self-hosted GitLab:
Your GitLab CI is meant to run the actions that build and deploy your application, so it doesn't make sense to have the application running on the same instance as your GitLab CI runner. Even if you want to run the app on the same instance, it shouldn't run in the same container as the runner; to achieve this, you should configure the GitLab Runner to use the Docker daemon on the host.
In any case, I would strongly recommend deploying somewhere outside of where your GitLab Runner is running, and even better to a managed container service such as Kubernetes or AWS ECS.
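If you do go the "Docker on the host" route, a minimal sketch of the runner's config.toml is below (name, URL and token are placeholders). Mounting the host's Docker socket into job containers makes docker commands in your jobs act on the host daemon:

[[runners]]
  name = "my-runner"
  url = "https://gitlab.com/"
  token = "RUNNER_TOKEN"
  executor = "docker"
  [runners.docker]
    image = "docker:latest"
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]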

You did not specify what your setup is, but based on the information in your question I can deduce that you're using gitlab.com (as opposed to a private GitLab instance) and a self-hosted runner with the Docker executor.
You cannot use a runner with Docker executor to deploy containers directly to the underlying Docker installation.
There are two ways to do this:
Use a runner with a shell executor as described in the YT tutorial you posted.
Prepare a helper image that will use SSH to connect to your server and run docker commands there.
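A rough sketch of the second option (the server address, user and the SSH_PRIVATE_KEY CI variable are placeholders you would set up yourself, and the server must be able to pull the image, e.g. be logged in to the registry):

deploy_image:
  stage: deploy
  image: alpine:latest
  before_script:
    - apk add --no-cache openssh-client
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | ssh-add -
  script:
    - ssh -o StrictHostKeyChecking=no user@your-server "docker pull $TEST_NAME && docker rm -f gitlab_learn; docker run -dt -p 8080:80 --name gitlab_learn $TEST_NAME"

This way the runner only acts as a client; the containers end up on your server, not inside the job.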

Related

Error connecting to docker daemon from a docker run in Bitbucket Pipelines script

In a Bitbucket pipeline, I have added a step to run docker-bench-security.
step:
  name: Docker Bench Security
  services:
    - docker
  caches:
    - docker
  script:
    - docker run -e DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST -v /var/run/docker.sock:/var/run/docker.sock --label docker_bench_security docker/docker-bench-security -e check_1.*
The pipeline execution fails with the error below.
Error connecting to docker daemon (does docker ps work?)
Could anyone please help me to fix it?
In Bitbucket Pipelines you don't have an actual /var/run/docker.sock to connect to. Imitating how Bitbucket forwards the Docker engine to its Docker pipes, the options should be something like:
docker run \
  --env=DOCKER_HOST="tcp://host.docker.internal:2375" \
  --volume=/usr/local/bin/docker:/usr/local/bin/docker:ro \
  --add-host="host.docker.internal:$BITBUCKET_DOCKER_HOST_INTERNAL" \
  ...
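Putting that together with the docker-bench-security invocation from the question (and dropping the socket mount, since the socket isn't there), the full command would look roughly like:

docker run \
  --env=DOCKER_HOST="tcp://host.docker.internal:2375" \
  --env=DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST \
  --volume=/usr/local/bin/docker:/usr/local/bin/docker:ro \
  --add-host="host.docker.internal:$BITBUCKET_DOCKER_HOST_INTERNAL" \
  --label docker_bench_security \
  docker/docker-bench-security -e check_1.*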

How to run a docker container that has docker running inside it?

I'm building an app that makes API calls to run code inside Docker containers.
I want to run a Docker container that has Docker running inside it.
I want to create a Dockerfile that pulls other Docker images into it and then waits for API calls (on port 2376) to create, run, and delete containers based on the images I pulled in the Dockerfile.
This is the dockerfile I'm trying to create right now.
FROM docker:stable
RUN docker pull python
EXPOSE 23788
CMD tail -f /dev/null
However, when the RUN command is executed I get this error message:
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I don't really know how to start Docker inside a Docker container.
The reason I need this kind of Dockerfile is so that I can then use Kubernetes to scale this part of my application.
There's a special image for this, docker:dind. See the bit about "Docker in Docker" in https://hub.docker.com/_/docker.
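A minimal sketch of that approach (names are examples; setting DOCKER_TLS_CERTDIR to an empty string makes the daemon listen on plain TCP port 2375, which is fine for experimenting but not for production):

# run the Docker daemon in its own container; --privileged is required for dind
docker network create dind-net
docker run -d --privileged --name dind --network dind-net -e DOCKER_TLS_CERTDIR="" docker:dind
# point a client container at that daemon and pull an image inside it
docker run --rm --network dind-net -e DOCKER_HOST=tcp://dind:2375 docker:stable docker pull python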

What happens to docker images pulled / run in gitlab CI?

I'm experiencing a strange problem with a GitLab CI build.
My script looks like this:
- docker pull myrepo:myimage
- docker tag myrepo:myimage myimage
- docker run --name myimage myimage
It worked a few times, but then I started getting errors:
docker: Error response from daemon: Conflict. The container name
"/myimage" is already in use by container ....
I logged on to the particular machine where that step was executed, and docker ps -a showed that the container was left behind on the build machine...
I expected GitLab CI build steps to be fully separated from the external environment by running them in Docker containers, so that a build would not 'spoil' the environment for other builds. I therefore expected all images and containers created by the CI build to simply perish... which is not the case...
Is my GitLab somehow misconfigured, or is this expected behaviour, i.e. that Docker images/containers exist in the context of the host machine and not within the job's Docker image?
In my build I use the image docker:latest.
No, your GitLab is not misconfigured. GitLab does clean up its runners and executors (the Docker image you run your commands in).
Since you are using DinD (Docker-in-Docker), any container you start or build is actually built on the same host and runs beside your job executor container, not 'inside' it.
Therefore you should clean up yourself; GitLab has no knowledge of what you do inside your job, so it's not responsible for it.
I run various pipelines in the same situation you described, so here are some suggestions:
job:
  script:
    - docker pull myrepo:myimage
    - docker tag myrepo:myimage myimage
    - docker run --name myimage myimage
  after_script:
    # Remove the container; since it was not started with -d it has most likely
    # already exited, so ignore errors about that.
    - docker rm -f myimage || true
    # Remove the pulled and tagged images
    - docker rmi -f myrepo:myimage myimage
Also (and I don't know your exact job of course) this could be shorter:
job:
  script:
    # Only pull if you want to 'refresh' the image that may be left behind
    - docker pull myrepo:myimage
    - docker run --name myimage myrepo:myimage
  after_script:
    # Remove the container; since it was not started with -d it has most likely
    # already exited, so ignore errors about that.
    - docker rm -f myimage || true
    # Remove the pulled image
    - docker rmi -f myrepo:myimage
The problem was that /var/run/docker.sock was mapped as a volume, which caused all docker commands to be invoked on the host, not inside the image. It's not a misconfiguration per se, but the alternative is to use the dind service: https://docs.gitlab.com/ce/ci/docker/using_docker_build.html#use-docker-in-docker-executor
It required three things to be done:
Add the following section to the GitLab CI config:
services:
  - docker:dind
Define the variable DOCKER_DRIVER: overlay2, either in the CI config or globally in config.toml.
Load the kernel module overlay (add it to /etc/modules).
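A consolidated sketch of the resulting CI config (job and image names are examples):

image: docker:latest

services:
  - docker:dind

variables:
  DOCKER_DRIVER: overlay2

build_image:
  stage: build
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME .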

Docker deploy my Strongloop Loopback Node server

I'd like to dockerize my StrongLoop LoopBack-based Node server and start using Process Manager (PM) to keep it running.
I've been using RancherOS on AWS which rocks.
I copied (but didn't add anything to) the following Dockerfile as a template for my own Dockerfile:
https://hub.docker.com/r/strongloop/strong-pm/~/dockerfile/
I then:
docker build -t somename .
(Dockerfile is in .)
It now appears in:
docker images
But when I try to start it, it exits right away:
docker run --detach --restart=no --publish 8701:8701 --publish 3001:3001 --publish 3002:3002 --publish 3003:3003 somename
AND if I run the strong-pm image instead, after opening the ports on AWS, it works: the same command as above works with strongloop/strong-pm but not with somename
(I can browse aws-instance:8701/explorer).
Also, these instructions to deploy my app, https://strongloop.com/strongblog/run-create-node-js-process-manager-docker-images/, require:
slc deploy http://docker-host:8701/
but Rancher doesn't come with npm (or curl) installed, and when I bash into the VM, slc isn't installed either, so it seems like slc needs to be run from "outside" the VM:
docker exec -it fb94ddab6baa bash
If you're still reading, nice. Essentially, I'm trying to add a Dockerfile to my git repo that will deploy my app server (including pulling code from my repos) on any Docker box.
The workflow for the strongloop/strong-pm Docker image assumes you are deploying to it from a workstation. The footprint of npm install -g strongloop is significantly larger than strong-pm alone, which is why the Docker image has only strong-pm installed in it.
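In other words, a sketch of the intended workflow (assuming the strongloop CLI on your workstation and the host/port from the question):

# on your workstation, not inside the container or on the RancherOS host
npm install -g strongloop
# from your app's directory: package the app, then deploy it to strong-pm running on the Docker host
slc build
slc deploy http://docker-host:8701/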

How to properly start Docker inside Jenkins that is also running in Docker

I'm trying to run Docker inside a Jenkins container that is also running in Docker (i.e. Docker in Docker). What I want to know is how to properly start the Docker service when booting Jenkins. The only solution I've found today is to build my own Jenkins image based on the official Jenkins image but change the jenkins script loaded by the entry point to also start up Docker:
# I've added this line just before Jenkins is started from the script:
sudo service docker start
# I've also removed "exec" from the original file, which used "exec java $JAVA_OPTS ...", but that didn't work
java $JAVA_OPTS -jar /usr/share/jenkins/jenkins.war $JENKINS_OPTS "$@"
This works when I run a new container (using docker run), but the problem is that if I do docker start on a stopped container, the Docker service is not started.
I strongly suspect that this is not the right way to start my Docker service. My plan is to perhaps use supervisord to start Jenkins and Docker separately (I suppose container linking is out of the question since Docker should be executed as a service on the same container that Jenkins is running on?). My concern with this approach is that I'm going to lose the EntryPoint specified in the Jenkins Dockerfile which allows me to pass arguments to the Jenkins container when starting the container, for example:
docker run -p 8080:8080 -v /your/home:/var/jenkins_home jenkins -- <jenkins_arguments>
Does anyone have any recommendations on a good way to solve this preferably by not forking the official Jenkins image?
I'm pretty sure you cannot do that.
Docker in Docker doesn't mean you have to run Docker inside Docker with three levels: host > first-level container > second-level container.
In fact, you just need to share Docker with the host, and it is your host that will run the other containers.
To do that, you have to mount a volume with the -v parameter:
-v /var/run/docker.sock:/var/run/docker.sock
With this volume, when you run docker inside your Jenkins container, the Docker client will communicate with the Docker daemon of your host in order to run the new containers.
To do that, you should also run your Jenkins container with the --privileged flag:
--privileged
To summarize, here is the full command line:
docker run -d -v /var/run/docker.sock:/var/run/docker.sock --privileged myimage
And you don't need to create a new Jenkins image for that.
Hope this helps.
http://container-solutions.com/running-docker-in-jenkins-in-docker/
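Once the container is up, you can sanity-check the socket sharing from inside the Jenkins container (the container name/id is a placeholder, and this assumes a docker client binary is available inside the image):

# the client inside the container talks to the host daemon through the mounted socket,
# so this should list the host's containers, including the Jenkins container itself
docker exec -it <jenkins_container> docker ps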
