Gitlab CI Runner Docker Executor Expose Ports - docker

I have GitLab and GitLab CI runner containers. A project is registered with a GitLab Runner that uses the Docker executor. Everything is OK, and I have set privileged mode to true. There are flags in the [runners.docker] section that correspond to docker run options, such as volume sharing, privileged mode, image, services, and links, but I could not find a flag for exposing ports. My aim is to run a pipeline in which the build container exposes ports that other containers can communicate with.
Is it possible to do this with GitLab CI Runner?

Normally that's what services are for. You take a container whose ports you want to expose and define it as a service. That way there are no exposed host ports, but there is a service link which you can use for inter-container communication. That's valid for the Docker executor; with the Kubernetes executor, all services are part of the pod and therefore available directly on localhost.
In other words: if, for example, your build job needs a PostgreSQL instance running on its default port 5432, you just start postgres:latest as a service for your job. You can then reach it via postgres:5432 with the Docker executor and localhost:5432 with the Kubernetes executor.
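A minimal .gitlab-ci.yml sketch of that setup, assuming the Docker executor (the job name, build image, and test script are illustrative placeholders; only the postgres service comes from the example above):

test-job:
  image: your-build-image:latest        # placeholder build image
  services:
    - postgres:latest                   # reachable as host "postgres" with the Docker executor
  variables:
    POSTGRES_PASSWORD: example          # required by the postgres image
  script:
    - ./run-tests.sh postgres:5432      # hypothetical test script using the service host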
If services do not fit your use case, you might want to expand your question as to where they fall short; there might be an alternative answer.

Related

connecting to an insecure local docker registry in uncontrolled CI environment

I'm building a microservice that performs operations on a Docker registry.
The microservice has a test which starts a registry via the docker-registry image from Docker Hub, so the microservice can connect to it, set it up, work on it, etc.
The test fails in CI: the Docker client can't connect to the test registry because it's insecure. The CI environment is dynamic (a different random IP/port each time) and the Docker daemon is shared by other parallel tests, so having the test edit the daemon's global JSON config and restart the daemon seems like a bad solution.
Has anyone solved this? How do you test integration with a Docker registry in CI? Am I doomed to modify the global Docker config files and restart/trigger a config reload?
Some specifics:
The build tool is Bazel and it runs in GCB, so the test itself runs on RBE workers in Google Cloud. Those workers are isolated, have no network access while running the tests, and I can't really configure much; it's not my machine, and it's a random machine each time for each test.
We ended up starting another container that runs its own Docker daemon inside it (without mounting the external Docker socket, so it really is a separate daemon instance).
We start it only once we know the private registry's address, so we can configure that daemon to start up with the insecure-registry flag.
To let the containers communicate, we gave each container a name and put them on a shared network; a rough sketch of the idea follows.
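Something along these lines, assuming a recent docker:dind image; the network and container names, ports, and the registry image are illustrative, and the TLS setting depends on the dind image version:

# Shared network so the containers can reach each other by name
docker network create test-net

# Throwaway registry for the test (illustrative image and port)
docker run -d --name test-registry --network test-net registry:2

# Separate inner Docker daemon that trusts the test registry
docker run -d --privileged --name test-dind --network test-net \
  -e DOCKER_TLS_CERTDIR="" \
  docker:dind --insecure-registry=test-registry:5000

# The test then talks to the inner daemon instead of the shared one, e.g.:
# docker -H tcp://test-dind:2375 pull test-registry:5000/some/image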

Rancher CLI random host port mapping

I am planning to use Rancher for managing my containers. On my dev box, we plan to bring up several containers, each serving a REST API.
I am able to automate building my containers using Jenkins and want to run them via Rancher to take advantage of random host port mapping. I can do this through the Rancher UI, but I can't find a way to automate it with the CLI.
Example:
Jenkins builds Container_A, which exposes 8080 -> Jenkins also runs the Rancher CLI to start the container, mapping 8080 to a random host port. The same goes for Container_B, which also exposes 8080.
Hope my question makes sense.
Thanks
Vijay
You should just be able to do this in the service definition in the Docker Compose YAML file:
...
ports:
  - "8080"
...
If you generate something in the UI and look at the configuration of the stack, you'll see the corresponding compose yml.
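For context, a minimal docker-compose.yml for one of these services might look like this (the service and image names are illustrative):

version: '2'
services:
  container_a:
    image: my-registry/container_a:latest   # illustrative image
    ports:
      - "8080"    # container port only, so the host port is assigned randomly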
Alternatively, you can use:
rancher run --publish 8080 nginx
then get the randomly assigned port:
rancher inspect <stackname>/<service_name> | jq '.publicEndpoints[].port'
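Wiring those two commands together in the Jenkins step could look roughly like this (the stack and service names are placeholders):

# Run the image, publishing container port 8080 to a random host port
rancher run --publish 8080 nginx

# Read back the host port Rancher assigned
PORT=$(rancher inspect <stackname>/<service_name> | jq '.publicEndpoints[].port')
echo "Service is reachable on host port $PORT"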

Docker in docker and docker compose block one port for no reason

Right now I am setting up an application whose deployment is based on Docker images.
I use GitLab CI to (a rough pipeline outline is sketched below):
Test each service
Build each service
Dockerize each service (build a Docker image)
Run integration tests (start docker compose, which brings up all services on special ports, then run the integration tests)
Stop the production containers and run the new images
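In .gitlab-ci.yml terms, that pipeline corresponds roughly to stages like these (the stage and job names, images, and commands are illustrative):

stages:
  - test
  - build
  - dockerize
  - integration-test
  - deploy

integration-test:
  stage: integration-test
  script:
    - docker-compose up -d          # bring up all services on their special ports
    - ./run-integration-tests.sh    # hypothetical test entry point
    - docker-compose down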
I did this for each service, but I ran into an issue.
When I start my Docker containers for the integration tests, it happens within a GitLab CI job. A Docker-based runner is used for each job, and I mount my host's Docker socket to be able to use Docker in Docker.
So my Gradle Docker image is started by the GitLab runner, Docker is installed inside it, and all images are started using docker compose.
One microservice listens on port 10004. The docker compose file contains an 11004:10004 port mapping.
My integration tests try to connect to port 11004, but this does not work right now.
When I attach to the container that runs docker compose while it is executing the integration tests, I can't connect manually either by calling
wget ip:port
I just get the message "connected" and it waits for a response. Neither can my tests connect successfully, and my service does not log any message about a new connection.
When I execute the same wget command from my host shell, it works.
It's a public IP, and from within my container I can also connect to other ports using telnet and wget. Just this one port of this one service is broken when I connect from my docker-in-docker instance.
When I do not use docker compose, it works. Docker compose seems to set up a special default network that does something weird.
Setting the network to host also works (see the sketch below)...
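For reference, the relevant part of the compose file for this kind of setup would be something like the following sketch (the service and image names are illustrative):

services:
  my-microservice:
    image: my-microservice:latest     # illustrative image
    ports:
      - "11004:10004"                 # host port 11004 -> container port 10004
    # the workaround mentioned above: bypass compose's default bridge network
    # network_mode: host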
Has anyone else run into this when using docker compose?
The same setup works flawlessly in Docker for Mac, but my server runs Debian 8.
My workaround for now is to use a shell runner to avoid the docker-in-docker issues; everything works there as well.
So docker in docker combined with docker compose seems to have an ugly bug.
I'm writing this while sitting on the subway, but I hope the description of my issue is enough to compare experiences. I don't think source code is needed to spot a bad configuration, because the same setup works without docker in docker and on Mac.
I figured out that docker in docker still has some weird behaviors. I fixed my issue by adding a new GitLab CI runner that uses the shell executor. That way docker-compose runs directly on my host and everything works flawlessly.
I can reuse the same runner for starting Docker images in production as I do for integration testing, so the easy fix has an extra benefit for me.
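For anyone reproducing the fix, registering such a shell-executor runner looks roughly like this (the URL, token, and description are placeholders):

gitlab-runner register \
  --non-interactive \
  --url https://gitlab.example.com/ \
  --registration-token <TOKEN> \
  --executor shell \
  --description "shell runner for docker-compose jobs"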
The resulting best practice to avoid these pitfalls:
Only use docker in docker when there is a real need for it,
for example to ensure fast I/O between your host Docker image and the Docker image you are interested in.
Have fun using docker (in docker (in docker)) :]

docker swarm http connectivity

I'm new to Docker and Docker Swarm and am trying out both.
Initially I started a Docker daemon and was able to connect to it on the HTTP port, i.e. 2375. I installed the Docker cloud plugin in Jenkins, added http://daemon-IP:2375, and was able to create containers: it creates a container, runs my build inside it, and destroys the container.
My question is: will I be able to connect to Docker Swarm over the HTTP port the same way I connect to a standalone Docker daemon? Is there any documentation on this, or is my understanding of Swarm wrong?
Please suggest.
Thanks
Yes, you can connect to a remote host the same way you are currently connecting via the Unix socket. People very often forget that Docker has a client-server architecture and that your "docker run ..." commands are basically just commands issued by the Docker client.
You set these environment variables:
DOCKER_HOST=tcp://ip.address.of.host:port
DOCKER_TLS_VERIFY=1
DOCKER_CERT_PATH=/directory/where/certs/are
(The last two are needed only for TLS connections, which I would highly recommend. You'd have to set up https://docs.docker.com/engine/security/https/, which is recommended for a production environment.)
Once you've set your DOCKER_HOST environment variable, if you issue a docker command and get a response, it will come from the remote host, provided everything is set up correctly.
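As a concrete sketch, pointing the local client at a remote daemon or swarm manager looks like this (the address and cert path are placeholders):

export DOCKER_HOST=tcp://manager.example.com:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=~/.docker/remote-certs

# Every client command now runs against the remote endpoint
docker info
docker run --rm alpine echo "hello from the remote host"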

Can you run Dind as a service on Tutum so that Drone can use it?

I'm new to Docker and Drone but I'm liking what I've found so far :)
Can you run Dind as a service on Tutum so that Drone can use it?
Drone CI is designed to run on a Docker host and to spin up whatever containers it needs.
It seems that drone itself can be run in a container but it must have access to the host docker daemon.
As far as I can see, on Tutum you don't really have access to the host's docker daemon.
It's possible to run drone in Dind (Docker in Docker).
But could I just run a container running Dind that I could point my drone container at via DOCKER_HOST, or am I completely misunderstanding the relationship between Drone and Docker?
It turns out you can, and it all seems to work just fine :)
I have my "node" (in Tutum speak), which has Docker running on it, but it's Tutum's Docker that you can interact with, to some extent, via their API.
Inside that I have an off-the-shelf dind (Docker-in-Docker) container running as a daemon, with its listening port specified in the PORT environment variable (which wrapdocker picks up). That port is exposed (though not publicly) via Tutum's interface.
Drone is configured from another off-the-shelf container (for GitHub etc.) and is linked to the dind service, so Drone's DOCKER_HOST environment variable can be set to: {linked dind alias}:{port number}
...and it works :)
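A rough compose-style stack sketch of that layout (the images, alias, and port number are illustrative; the PORT variable is an assumption based on the wrapdocker-based dind image described above):

dind:
  image: jpetazzo/dind        # assumed off-the-shelf wrapdocker-based dind image
  privileged: true
  environment:
    - PORT=2375               # port the inner daemon listens on
  expose:
    - "2375"                  # reachable by linked services only, not publicly

drone:
  image: drone/drone          # GitHub-related settings omitted
  links:
    - dind
  environment:
    - DOCKER_HOST=tcp://dind:2375   # {linked dind alias}:{port number}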
I feel like this should have been clear from the start but I just don't think that I believed it!
