docker-compose in gitlab-ci: expose ports

I'd like to set up a GitLab repository and a GitLab CI pipeline that uses docker-compose for integration tests.
I finally managed to start some containers with docker-compose.
image: docker/compose:1.29.2
variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_HOSTNAME: myhost
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: "" #TODO
services:
  - name: docker:dind
    alias: docker
    command: [
      "--registry-mirror=https://artifactory.mycorp.net"
    ]
Now I am faced with the problem that I need network interaction with some services that run on a Windows server with a UI (irrelevant for the tests) and other components, so I cannot dockerize them with reasonable effort.
The gitlab-runner runs exclusively on the server for this one project only!
My idea is that I need to get the docker:dind service onto the host network, so that the Docker containers spawned inside that service are reachable through their explicitly exposed ports. However, I have no clue how to achieve that.
Any other way to solve that problem is also welcome!

I figured out a solution that seems to work for now (see the sketch below):
Create a network, e.g. docker network create gitlab-runner (bridge mode)
Configure the gitlab-runner to use that network by setting network_mode to the name of the network created above (e.g. "gitlab-runner").
Start the dind container manually, connected to that network and exposing the necessary port(s)/port ranges
Don't declare the dind container as a service in .gitlab-ci.yml
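A minimal sketch of those steps, assuming a hypothetical container name gitlab-dind and a placeholder port range 9092-9094 for the services under test:
# 1. Create the shared bridge network
docker network create gitlab-runner

# 2. In the runner's config.toml, make job containers join it:
#      [runners.docker]
#        network_mode = "gitlab-runner"

# 3. Start dind manually on that network, publishing the port range the tests need
docker run -d --name gitlab-dind --privileged \
  --network gitlab-runner \
  -e DOCKER_TLS_CERTDIR="" \
  -p 9092-9094:9092-9094 \
  docker:dind --registry-mirror=https://artifactory.mycorp.net

# 4. In .gitlab-ci.yml, drop the dind service and point DOCKER_HOST at the
#    manually started container instead, e.g. tcp://gitlab-dind:2375/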
So far it seems to work for a minimal example starting up zookeeper, kafka and a kafka-restproxy.
For the full project I still have some errors, but I assume they are unrelated to this issue. If it turns out to be wrong, I'll keep you updated.
Actually, the errors are related to this issue: with this method, the checked-out project is only available in the docker/compose container. The containers, however, are started from the context of the dind container, in which the files are not present.
The files could be copied into a new Docker image with a build step first, or made available through a shared volume.

You have to make sure that both services are on the same Docker network to achieve what you want. By default, containers end up on different networks, but you can configure them to share one.
Below is an example of what I mean; do the same for your containers.
docker network create db_network

docker run -d \
  --name mysql-spi1 \
  --publish 3306 \
  --network db_network \
  --restart unless-stopped \
  --env MYSQL_ROOT_PASSWORD=ebsbduabdc \
  --volume mysqlspi-datadir:/var/lib/mysql \
  mysql:8 \
  --default-authentication-plugin=mysql_native_password
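A second container can then join the same network and reach MySQL by container name; a sketch with a placeholder application image:
# Hypothetical application container; it reaches MySQL at the hostname "mysql-spi1" on port 3306
docker run -d \
  --name my-app \
  --network db_network \
  --env DB_HOST=mysql-spi1 \
  --env DB_PORT=3306 \
  my-app-image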

Related

Advantage of using docker-compose file version 3 over a shellscript?

My initial reason for creating a docker-compose.yml was to take advantage of features such as build: and depends_on: to have a single file that builds all my images and runs them in containers. However, I noticed that version 3 deprecates most of these options, and I'm curious why I would use it over building a shell script.
This is currently my shellscript that runs all my containers (I assume this is what the version 3 docker-compose file would replace if I were to use it):
echo "Creating docker network net1"
docker network create net1
echo "Running api as a container with port 5000 exposed on net1"
docker run --name api_cntr --net net1 -d -p 5000:5000 api_img
echo "Running redis service with port 6379 exposed on net1"
docker run --name message_service --net net1 -p 6379:6379 -d redis
echo "Running celery worker on net1"
docker run --name celery_worker1 --net net1 -d celery_worker_img
echo "Running flower HUD on net1 with port 5555 exposed"
docker run --name flower_hud --net net1 -d -p 5555:5555 flower_hud_img
Does Docker Swarm rely on using stacks? If so, then I can see a use for docker-compose and stacks, but I couldn't find an answer online. I would use version 3 because it is compatible with Swarm, unlike version 2, if what I've read is true. Maybe I am missing the point of docker-compose completely, but as of right now I'm a bit confused as to what it brings to the table.
Readability
Compare your sample shell script to a YAML version of same:
version: "3"
services:
  api_cntr:
    image: api_img
    networks:
      - net1
    ports:
      - "5000:5000"
  message_service:
    image: redis
    networks:
      - net1
    ports:
      - "6379:6379"
  celery_worker1:
    image: celery_worker_img
    networks:
      - net1
  flower_hud:
    image: flower_hud_img
    networks:
      - net1
    ports:
      - "5555:5555"
networks:
  net1:
To my eye at least, it is much easier to determine the overall architecture of the application from reading the YAML than from reading the shell commands.
Cleanup
If you use docker-compose, then running docker-compose down will stop and clean up everything, remove the network, etc. To do that in your shell script, you'd have to separately write a remove section to stop and remove all the containers and the network.
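For comparison, a sketch of the cleanup section the shell-script approach would need, using the container and network names from the script above:
# Stop and remove every container started by the script, then remove the network
docker rm -f api_cntr message_service celery_worker1 flower_hud
docker network rm net1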
Multiple inheriting YAML files
In some cases, such as for dev & testing, you might want to have a main YAML file and another that overrides certain values for dev/test work.
For instance, I have an application where I have a docker-compose.yml as well as docker-compose.dev.yml. The first contains all of the production settings for my app. But the "dev" version has a more limited set of things. It uses the same service names, but with a few differences.
Adds a mount of my code directory into the container, overriding the version of the code that was built into the image
Exposes the postgres port externally (so I can connect to it for debugging purposes) - this is not exposed in production
Uses another mount to fake a user database so I can easily have some test users without wiring things up to my real authentication server just for development
Normally the service only uses docker-compose.yml (in production). But when I am doing development work, I run it like this:
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d
It will load the normal parameters from docker-compose.yml first, then read docker-compose.dev.yml second, and override only the parameters found in the dev file. The other parameters are all preserved from the production version. But I don't require two completely separate YAML files where I might need to change the same parameters in both.
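Not the author's actual files, but an override along those lines might look like this, assuming the production file defines web and db services:
# docker-compose.dev.yml -- hypothetical development override
version: '3'
services:
  web:
    volumes:
      - ./src:/app/src       # mount local code over what was built into the image
  db:
    ports:
      - "5432:5432"          # expose postgres for debugging; not exposed in production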
Ease of maintenance
Everything I described in the last few paragraphs can be done using shell scripts. It's just more work to do it that way, and probably more difficult to maintain, and more prone to mistakes.
You could make it easier by having your shell scripts read a config file and such... but at some point you have to ask if you are just reimplementing your own version of docker-compose, and whether that is worthwhile to you.

How can one Docker container call another Docker container

I have two Docker containers
A Web API
A Console Application that calls Web API
Now, locally the Web API runs on localhost and the console application has no problem calling it. However, once these two things are Dockerized, how can I possibly make the URL of the Dockerized API available to the Dockerized console application?
I don't think I need Docker Compose, because I am passing the URL of the API as an argument, so it's just a matter of making sure that the Dockerized API's URL is reachable by the Dockerized console application.
Any ideas?
The idea is not to pass the url, but the hostname of the other container you want to call.
See Networking in Compose
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
This is what replaces the deprecated --link option.
And if your containers are not running on a single Docker server node, Docker Swarm Mode would enable that discoverability across multiple nodes.
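A minimal compose sketch of that idea, with hypothetical image and service names; the console application would then be given http://webapi:5000/ (or whatever port the API listens on inside its container) instead of a localhost URL:
version: '3'
services:
  webapi:
    image: my-webapi-image           # hypothetical image name
  console:
    image: my-console-image          # hypothetical image name
    environment:
      - API_URL=http://webapi:5000/  # the service name "webapi" resolves inside the compose network
    depends_on:
      - webapi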
This is the best way I have found to connect multiple containers in a local machine / single cluster.
Given: data-provider-service, data-consumer-service
Option 1: Using Network
docker network create data-network
docker run --name=data-provider-service --net=data-network -p 8081:8081 data-provider-image
docker run --name=data-consumer-service --net=data-network -p 8080:8080 data-consumer-image
Make sure to use URIs like: http://data-provider-service:8081/ inside your data-consumer-service.
Option 2: Using Docker Compose
You can define both the services in a docker-compose.yml file and use depends_on property in data-provider-service.
e.g.
data-consumer-service:
  depends_on:
    - data-provider-service
You can see more details here on my Medium post: https://saggu.medium.com/how-to-connect-nultiple-docker-conatiners-17f7ca72e67f
You can use the link option with docker run:
Run the API:
docker run -d --name api api_image
Run the client:
docker run --link api busybox ping api
You should see that api can be resolved by docker.
That said, going with docker-compose is still a better option.
The problem can be solved easily using Compose. With Compose, you just create one configuration file (docker-compose.yml) like this:
version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
To make it run, just call up like this:
docker-compose up
This is the best way to run your whole stack, so check this reference:
https://docs.docker.com/compose/
Success!

Start Docker Containers on logon under Windows

I've just set up a new Windows 10 development machine and so as to minimise the hassle of installs I've got various dev dependencies (Oracle, MongoDB, RabbitMQ, HAProxy, etc.) running under Docker using a docker-compose script.
I'd like to automatically start these containers on Windows logon, but as yet I haven't figured out a way to do this; a simple script that executes docker-compose up -d in the correct directory should do it, but if it executes immediately on logon, Docker hasn't started up yet and the script fails. Does anyone know how to programmatically wait until Docker is running?
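One workaround (an assumption, not taken from the answers below) is to poll the Docker daemon before running docker-compose; a sketch that could be run from Git Bash or WSL at logon, with the project path as a placeholder:
#!/bin/sh
# Wait until the Docker daemon answers, then bring the stack up.
until docker info > /dev/null 2>&1; do
  echo "Waiting for Docker to start..."
  sleep 5
done
cd /c/path/to/compose/project   # placeholder path
docker-compose up -d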
To further elaborate on my comment, I have done a little test with a webserver service, but it should work for any service, as long as you configure it the way you want it to behave.
It's quite easy to set this up using the following commands:
docker swarm init
Then for example a webserver
docker service create --name webserver --publish 80:80 httpd
Or even a database
docker service create --replicas 1 --name database --publish 1433:1433 -e "ACCEPT_EULA=y" -e "SA_PASSWORD=test" microsoft/mssql-server-linux
These will automatically restart after a reboot and after fatal crashes because of the requested number of replicas (1 by default) that Docker Swarm keeps alive for you.
Hopefully this can be of some help!
Turns out this is really easy to achieve via docker-compose using restart! I have changed our compose file as follows:
version: '2'
services:
  rabbitmq:
    image: rabbitmq:3.6-management
    ports:
      - "5672:5672"
      - "15672:15672"
    volumes:
      - /var/lib/rabbitmq
    restart: unless-stopped
This extra restart directive means that unless the container has been explicitly stopped it will start up with docker on logon/reboot. Tested and working!

Docker RabbitMQ persistency

RabbitMQ in Docker loses its data after the container is removed, even though named volumes are mounted.
My Dockerfile:
FROM rabbitmq:3-management
ENV RABBITMQ_HIPE_COMPILE 1
ENV RABBITMQ_ERLANG_COOKIE "123456"
ENV RABBITMQ_DEFAULT_VHOST "123456"
My run script:
IMAGE_NAME="service-rabbitmq"
TAG="${REGISTRY_ADDRESS}/${IMAGE_NAME}:${VERSION}"
echo $TAG
docker rm -f $IMAGE_NAME
docker run \
-itd \
-v "rabbitmq_log:/var/log/rabbitmq" \
-v "rabbitmq_data:/var/lib/rabbitmq" \
--name "service-rabbitmq" \
--dns=8.8.8.8 \
-p 8080:15672 \
$TAG
After removing the container, all data are lost.
How do I configure RabbitMQ in docker with persistent data?
RabbitMQ uses the hostname as part of the folder name in the mnesia directory. Maybe add a --hostname some-rabbit to your docker run?
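Applied to the run script in the question, that suggestion would look roughly like this; only the --hostname line is new, and some-rabbit is just an example value:
docker run \
  -itd \
  --hostname some-rabbit \
  -v "rabbitmq_log:/var/log/rabbitmq" \
  -v "rabbitmq_data:/var/lib/rabbitmq" \
  --name "service-rabbitmq" \
  --dns=8.8.8.8 \
  -p 8080:15672 \
  $TAG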
I had the same issue and I found the answer here.
TL;DR
Didn't do too much digging on this, but it appears that the simplest way to do this is to change the hostname as Pedro mentions above.
MORE INFO:
Using RABBITMQ_NODENAME
If you want to edit the RABBITMQ_NODENAME variable via Docker, it looks like you need to add a hostname as well since the Docker hostnames are generated as random hashes.
If you change the RABBITMQ_NODENAME var to something static like my-rabbit, RabbitMQ will throw something like an "nxdomain not found" error because it's looking for something like my-rabbit@<docker_hostname_hash>. If you know the Docker hostname and can automate pulling it into your RABBITMQ_NODENAME value, like so: my-rabbit@<docker_hostname_hash>, I believe it would work.
UPDATE
I previously said,
If you know the Docker hostname and can automate pulling it into your RABBITMQ_NODENAME value, like so: my-rabbit@<docker_hostname_hash>, I believe it would work.
This would not work as described precisely because the default docker host name is randomly generated at launch, if it is not assigned explicitly. The hurdle would actually be to make sure you use the EXACT SAME <docker_hostname_hash> as your originating run so that the data directory gets picked up correctly. This would be a pain to implement dynamically/robustly. It would be easiest to use an explicit hostname as described below.
The alternative would be to set the hostname to a value you choose -- say, app-messaging -- AND ALSO set the RABBITMQ_NODENAME var to something like rabbit@app-messaging. This way you are controlling the full node name that will be used in the data directory.
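As a sketch, that combination would look like this in a docker run invocation, using the example names from the paragraph above:
docker run -d \
  --hostname app-messaging \
  -e RABBITMQ_NODENAME=rabbit@app-messaging \
  -v "rabbitmq_data:/var/lib/rabbitmq" \
  -p 15672:15672 -p 5672:5672 \
  rabbitmq:3-management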
Using Hostname
(Recommended)
That said, unless you have a reason NOT to change the hostname, changing the hostname alone is the simplest way to ensure that your data will be mounted to and from the same point every time.
I'm using the following Docker Compose file to successfully persist my setup between launches.
version: '3'
services:
  rabbitmq:
    hostname: 'mabbit'
    image: "${ARTIFACTORY}/rabbitmq:3-management"
    ports:
      - "15672:15672"
      - "5672:5672"
    volumes:
      - "./data:/var/lib/rabbitmq/mnesia/"
    networks:
      - rabbitmq
networks:
  rabbitmq:
    driver: bridge
This creates a data directory next to my compose file and persists the RabbitMQ setup like so:
./data/
  rabbit@mabbit/
  rabbit@mabbit-plugins-expand/
  rabbit@mabbit.pid
  rabbit@mabbit-feature_flags

Managing a group of docker containers without the sweat

I am using a bash script to spin up a virtual network with two docker containers on it. This feels prehistoric. Is there some tool that can spin such an ensemble up and down & show its current status, or does one have to take care of that on their own?
In the case of docker-compose, it is unclear from the Docker documentation whether docker-compose is self-contained or tied to Swarm, and an authoritative example of a compose definition file, with commands for starting and stopping the ensemble, would be very helpful.
E.g. here is what a bash script would do to define/start an application of two interrelated containers; needless to say, this script does not help with managing its lifecycle beyond just starting it up once.
docker network create --driver bridge FooAppNet
docker run --rm --net=FooAppNet --name=component1 -p 9000:9000 component1-image
docker run --rm --net=FooAppNet --name=component2 component2-image
Also in this example, container component1 exposes port 9000 to the host, and its contained application has it hardwired in its configuration file, to consume the service of component2 by its name (following the common docker networking practice relying on docker networks' internal DNS).
For the example you've given, the following Docker Compose file would give you what you want:
component1:
  image: component1-image
  net: FooAppNet
  container_name: component1
  ports:
    - "9000:9000"
component2:
  image: component2-image
  net: FooAppNet
  container_name: component2
If you store this in a docker-compose.yml file and then run docker-compose up -d it will create/start/restart your containers and assign them to your FooAppNet network.
The -d flag runs the containers in detached mode and prevents the logging output being printed to your terminal window when you start the containers. You can still get their log via docker logs -f ... like with any other container.
You can then use docker-compose down and docker-compose restart etc to control the ensemble's lifecycle. As an aside, using variables can spice up the definition file towards greater flexibility.
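For instance (hypothetical variable name), docker-compose substitutes environment variables from the shell or from an .env file next to the compose file:
# .env (picked up automatically by docker-compose)
COMPONENT1_PORT=9000

# docker-compose.yml excerpt using the variable
component1:
  image: component1-image
  net: FooAppNet
  ports:
    - "${COMPONENT1_PORT}:9000"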
See in the comments below about using the network automatically spun up by docker compose.
TL;DR ― see the beginning section of https://docs.docker.com/compose/networking/ for the solution. It walks you through the entire necessary configuration. Works nicely; you just need to master the various docker-compose command-line options to be productive with it.
