envoy proxy docker swarm yml config for fixed paths

I am using Docker Swarm and create services with the docker service create command.
I have a fixed mapping: requests to path /a should go to the swarm service serviceA, and /b to serviceB.
I couldn't find a working example that combines Docker Swarm with path-based routing to services.
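No full example made it into this thread, but for reference, a minimal static Envoy config for this kind of fixed-path routing might look roughly like the sketch below. It assumes Envoy runs as a swarm service attached to the same overlay network, so the service names serviceA/serviceB resolve via swarm DNS; the listener port 10000 and the backend port 80 are placeholders.

```yaml
# envoy.yaml - sketch, not a drop-in config; ports and names are assumptions
static_resources:
  listeners:
    - name: http_listener
      address:
        socket_address: { address: 0.0.0.0, port_value: 10000 }
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                stat_prefix: ingress_http
                http_filters:
                  - name: envoy.filters.http.router
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
                route_config:
                  name: fixed_paths
                  virtual_hosts:
                    - name: all
                      domains: ["*"]
                      routes:
                        - match: { prefix: "/a" }
                          route: { cluster: serviceA }
                        - match: { prefix: "/b" }
                          route: { cluster: serviceB }
  clusters:
    - name: serviceA
      type: STRICT_DNS   # resolves the swarm service name via the overlay network's DNS
      connect_timeout: 1s
      load_assignment:
        cluster_name: serviceA
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address: { address: serviceA, port_value: 80 }
    - name: serviceB
      type: STRICT_DNS
      connect_timeout: 1s
      load_assignment:
        cluster_name: serviceB
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address: { address: serviceB, port_value: 80 }
```

Note that with swarm's default VIP endpoint mode, serviceA resolves to a single virtual IP and swarm load-balances behind it; resolving tasks.serviceA instead would give Envoy the individual task IPs to balance across itself.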

Related

GitLab Docker-in-Docker: how does Docker client in job container discover Docker daemon in `dind` service container?

I have a GitLab CI/CD pipeline that is being run on GKE.
One of the jobs in the pipeline uses a Docker-in-Docker service container so that Docker commands can be run inside the job container:
my_job:
  image: docker:20.10.7
  services:
    - docker:dind
  script:
    - docker login -u $USER -p $PASSWORD $REGISTRY
    - docker pull ${REGISTRY}:$TAG
    # ...more Docker commands
It all works fine, but I would like to know why. How does the Docker client in the my_job container know that it needs to communicate with the Docker daemon running inside the Docker-in-Docker service container, and how does it know the host and port of this daemon?
There is no 'discovery' process, really. The docker client must be told about the daemon host through configuration (e.g., DOCKER_HOST). Otherwise, the client falls back to a default configuration:
- if the DOCKER_HOST configuration is present, it is used. Otherwise:
- if the default socket (e.g., unix:///var/run/docker.sock) is present, the default socket is used.
- if the default socket is NOT present AND a TLS configuration is NOT detected, tcp://docker:2375 is used.
- if the default socket is NOT present AND a TLS configuration IS present, tcp://docker:2376 is used.
You can see this logic explicitly in the official docker image's entrypoint script.
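The fallback order above can be sketched as a small function. This is a simplified model, not the real entrypoint (which is a shell script in the docker-library/docker repo and keys off its TLS cert configuration; tls_configured stands in for that detection here):

```python
def resolve_docker_host(docker_host=None, socket_exists=False, tls_configured=False):
    """Simplified sketch of the docker image entrypoint's DOCKER_HOST fallback order."""
    if docker_host:                            # explicit DOCKER_HOST always wins
        return docker_host
    if socket_exists:                          # e.g. /var/run/docker.sock mounted by the runner
        return "unix:///var/run/docker.sock"
    if tls_configured:                         # TLS port; "docker" is the dind service alias
        return "tcp://docker:2376"
    return "tcp://docker:2375"                 # plain-TCP fallback to the dind service
```

The hostname docker in the fallback works because GitLab CI exposes each service container to the job container under its service alias, which defaults to docker for the docker:dind image.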
The docker client can be configured a couple ways, but the most common way in GitLab CI and with the official docker image is through the DOCKER_HOST environment variable. If you don't see this variable in your YAML, it may be set as a project or group setting or may be set on the runner configuration, or is relying on default behavior described above.
It's also possible, depending on your runner configuration (config.toml), that your job is not using the docker:dind service daemon at all. For example, if your runner mounts the docker socket (/var/run/docker.sock) into the job container via a volumes specification and there is no DOCKER_HOST (or equivalent) configuration, then your job is probably not using the service at all: per the configuration logic above, the client would use the mounted socket. You can run docker info in your job to be sure of this one way or the other.
Additional references:
official docker image entrypoint logic
Securing the daemon socket
GitLab docker in docker docs
Docker "context"

Create Docker Service within Docker Service

Is it possible to spawn Docker Services within a container running on Docker swarm? This would allow containers to dynamically maintain the components running in the swarm.
Currently I am able to run containers within other containers on the host machine by mounting the /var/run/docker.sock into the container while using the docker-py SDK.
docker run -v /var/run/docker.sock:/var/run/docker.sock master
Inside the container I have a python script that runs the following:
container = docker.from_env().containers.run('worker', detach=True, tty=True, volumes=volumes, network='backend-network', mem_limit=worker.memory_limit)
Is something similar to this possible in Docker Swarm, not just vanilla Docker?
You can mount the Docker socket and use the docker module just as you're doing now, but create a service instead of a container - assuming the socket you mount belongs to a manager node.
some_service = docker.from_env().services.create(…)
https://docker-py.readthedocs.io/en/stable/services.html
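Fleshing that out a bit: the sketch below mirrors the containers.run(...) call from the question as a swarm service. The image and network names ('worker', 'backend-network') come from the question; the helper builds the keyword arguments as a plain dict so the spec can be inspected without a running daemon:

```python
def worker_service_spec(image="worker", network="backend-network"):
    """Build kwargs for client.services.create(); pure, so testable without a daemon."""
    return {
        "image": image,
        "name": "worker",            # service name, assumed for illustration
        "networks": [network],       # overlay network from the question
    }

# On a manager node, with /var/run/docker.sock mounted into the container:
#   import docker
#   client = docker.from_env()
#   service = client.services.create(
#       **worker_service_spec(),
#       mode=docker.types.ServiceMode("replicated", replicas=2),
#   )
```

Unlike containers.run, swarm schedules the service's tasks across the cluster, so per-container options like tty have no direct equivalent; resource caps go through docker.types.Resources instead of mem_limit.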

Docker: Does container inherit /etc/hosts from docker host?

I have a machine running Docker (the docker host) and spin up some containers inside it.
I need the containers' services to be able to talk to each other - the containers expose ports, and they also need to resolve each other by hostname (e.g., example.com).
For example, container A needs to reach container B at the URL example.com:3000.
I've read this article, but I'm not quite sure about "inherit" from the docker host: will the docker host's /etc/hosts be appended to the /etc/hosts of containers running inside it?
https://docs.docker.com/engine/reference/run/#managing-etchosts
How can I achieve this?
Does this "inherit" have any connection to the type of docker container networking (https://docs.docker.com/v17.09/engine/userguide/networking/)?
No, the container does not inherit the host's /etc/hosts file. Docker manages the file inside your container and adds individual records when you use the --add-host parameter on docker run or extra_hosts in docker-compose (https://docs.docker.com/compose/compose-file/#extra_hosts).
Although if you're just trying to get two containers talking to each other, you can alternatively connect them to the same network. In docker-compose you can create what's called an external network and have all your docker-compose files reference it. You will then be able to connect by using the full docker container name (e.g. http://project_app_1:3000).
See https://docs.docker.com/compose/compose-file/#external
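Putting the two suggestions together, a sketch of such a compose file might look like this (the service names, image names, and the IP are placeholders for illustration):

```yaml
# docker-compose.yml - sketch; assumes the network was created beforehand with:
#   docker network create shared
version: "3"
services:
  app:
    image: myorg/app
    networks:
      - shared
    extra_hosts:
      - "example.com:172.20.0.10"   # appends this record to the container's /etc/hosts
  api:
    image: myorg/api
    networks:
      - shared
networks:
  shared:
    external: true                  # reference the pre-created network instead of creating one
```

On the shared network the containers can already reach each other by service/container name; extra_hosts is only needed when a fixed hostname like example.com must resolve.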

Set hostname of service container to host's hostname

I run a Docker cluster in swarm mode. Let's say I have 4 nodes: 1 manager, 3 workers. The hostnames are:
manager0
worker0
worker1
worker2
I start the service in global mode, so every node runs the service once.
Let's say the command looks like this:
docker service create --name myservice --mode global --network mynetwork ubuntu sleep 3600
mynetwork is an overlay network.
Now I am trying to access the hostname of the docker host from inside the containers, so I can pass it to an application running in the container.
I tried to pass the hostname via an environment variable (--env hostname=$(hostname)), but $(hostname) is expanded where the command runs - on the manager - so the variable is set to manager0 on all nodes.
Is there a way to access the hostname, or to pass the hostname to the containers?
You can use the newer placeholder templates when creating a service to set its hostname from the node it lands on.
Here is the feature request; it was implemented in Docker 17.10:
https://github.com/moby/moby/issues/30966
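Concretely, the command from the question could be adapted along these lines (a sketch; the {{.Node.Hostname}} placeholder is resolved per node at task creation):

```
docker service create \
  --name myservice \
  --mode global \
  --network mynetwork \
  --hostname "{{.Node.Hostname}}" \
  ubuntu sleep 3600
```

The same template syntax also works for --env, so an alternative is to pass it as a variable, e.g. --env NODE_HOSTNAME="{{.Node.Hostname}}", and read NODE_HOSTNAME in the application.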

docker-compose.yml to start containers on multiple VM's

I have created Docker containers using a docker-compose.yml file on a single host.
Can anybody tell me whether a docker-compose.yml file can be used to start Docker containers on multiple VMs? If yes, how?
The compose file cannot be used with the new Docker "swarm mode" introduced in June 2016 (Docker 1.12). The "legacy" Docker Swarm accepts compose files, but you should really focus on learning swarm mode, not the old Docker Swarm. It's much simpler too, except for the missing support for compose files.
"Swarm mode" accepts dab files, and there is a way to convert compose files to dab, but it's experimental (which means that a lot of what you have put in your compose file won't translate). So the current best way is to create bash scripts with the CLI commands, e.g.: docker service create --name nginx nginx:1.10-alpine.
And do have a look at #Matt's link about learning the basics of Docker swarm mode: http://docs.docker.com/engine/swarm/key-concepts
Alternatively, you can quickly spin up a legacy standalone Swarm from the external containers and a node list (by IP like 10.0.0.1 or hostname like nodeb):
docker run -d -P --restart=always --name swarm-manager swarm manager \
"nodes://10.0.0.1:2376,nodeb:2376,nodec:2376"
export DOCKER_HOST=$(docker port swarm-manager 2375)
docker-compose up
Before running this, you'd need to configure the engines to listen on 2376 with TLS configured, a client key/certificate, and the appropriate network access. See docker's documentation on TLS for more details on configuring this.
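For reference, each engine's daemon would be started with TLS flags along these lines (a sketch; the certificate paths are placeholders, and the docs above cover generating the CA, server, and client certs):

```
dockerd \
  --host tcp://0.0.0.0:2376 \
  --host unix:///var/run/docker.sock \
  --tlsverify \
  --tlscacert /etc/docker/ca.pem \
  --tlscert   /etc/docker/server-cert.pem \
  --tlskey    /etc/docker/server-key.pem
```

Note the snippet above uses port 2376 (the conventional TLS port), whereas the swarm-manager example lists the nodes on 2376 as well; the ports just have to match whatever the daemons actually listen on.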
