Accessing individual containers running in a Docker Swarm Service

I expose JMX and a custom management port on my services that run on the JVM. Both are bound to fixed ports inside the container and published on dynamically allocated (random) ports at the Swarm service level.
This works fine but it means I can only access one container at random in each service.
I’m thinking the solution for accessing all the containers individually is going to involve something outside Docker, probably a proxy service that my containers can register with and that I can then reach from outside the swarm.
Before I build it, anyone have any existing solutions or a native docker way to achieve this?
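For context, here is a rough sketch of one native option worth evaluating before building a proxy: publishing the management ports in host mode, so each task binds its port on the node it runs on instead of going through the routing mesh. The service name, image, and port number below are placeholders, not part of the original question.

```bash
# Hypothetical sketch: publish the JMX port per task ("host" mode) rather than
# through the routing mesh. Each task gets its own port on the node it runs on.
docker service create --name jvm-app \
  --publish mode=host,target=9010 \
  my-jvm-image   # placeholder image

# Find which node and host port each individual task ended up on:
docker service ps jvm-app
docker port <container-id> 9010   # run on the node hosting that task
```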

Related

Docker Container Containing Multiple Services

I am trying to build a container containing 3 applications, for example:
Grafana;
Node-RED;
NGINX.
So I would only need to expose one port, for example:
NGINX reverse proxy on port 3001/grafana redirects to Grafana on port 3000, and
NGINX reverse proxy on port 3001/nodered redirects to Node-RED on port 1880.
Does this approach make sense to you, or is this architecture not feasible compared to using Docker Compose?
If I understand correctly, your concern is about opening only one port publicly.
For this, you would be better off building 3 separate containers, each with its own service, all on the same Docker network. You could then wire your services together as you described over that virtual network instead of inside a single container.
Why? Because containers are specifically designed to hold the environment for a single application, in order to provide isolation and reduce compatibility issues, with all the network configuration done at a higher level, outside of the containers.
Having all your services inside the same container thwarts these advantages of containerized applications. It's almost like you're not even using containers.
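A minimal sketch of that layout using plain docker commands rather than Compose; the network name, image tags, and the assumption that an nginx.conf with the two proxy rules already exists locally are all illustrative, not prescriptive:

```bash
# Create one user-defined network so the containers can reach each other by name.
docker network create appnet

# Run each service in its own container on that network (no public ports).
docker run -d --name grafana --network appnet grafana/grafana    # listens on 3000
docker run -d --name nodered --network appnet nodered/node-red   # listens on 1880

# Only NGINX is published; its config proxies /grafana -> grafana:3000
# and /nodered -> nodered:1880 (nginx.conf assumed to exist in the current dir).
docker run -d --name proxy --network appnet -p 3001:3001 \
  -v "$PWD/nginx.conf:/etc/nginx/nginx.conf:ro" nginx
```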

Figuring out the IP address of a service for dockerized Consul

I am building a microservices based application and would like to use Consul as service registry. All in all I have three scenarios:
All the services run on the host.
All the services run on the host, but Consul runs in Docker.
All the services and Consul run in Docker.
Now I have the problem of how to register the services with their IP address, because I need to figure out their IP address so that it is reachable by Consul (e.g., for the health checks):
If everything runs on the same host, it's pretty easy: Simply use 127.0.0.1, and you're done.
If everything (including Consul) runs in Docker, I could use hostname -i from within the Docker containers to figure out their external IP and hand it over to Consul. This works, but I wonder if there is a better way to solve this? (Ideally, the solution should also work in the same way on Kubernetes.)
If the services run on the host, but Consul runs in Docker, right now I have no idea at all. Basically, Consul requires the host's IP address to be able to talk to the services, but I can only detect this from within the Consul container (by resolving host.docker.internal). First, this does not work from outside the container, and second, it only works for Docker for Mac / Windows, not e.g. with Kubernetes.
How could I solve these issues?
PS: I would like to avoid using a container such as registrator by Gliderlabs, since I have doubts how well this works on Kubernetes, and also it won't help with the mixed Docker / host scenario.
If you're using Kubernetes, you might start by checking whether its built-in service registry meets your needs. There's generally no direct path to reach a pod via its node's IP address, so the setup you describe won't really work well. (I might consider Consul for a key/value store, but I wouldn't reach for it as a service registry in Kubernetes land.)
In plain multi-host Docker land, this is one of the few situations I've found where host networking is appropriate. Start Consul with --net host or an equivalent option in Docker Compose or another orchestration tool. Consul will then believe "its" IP address is the host's, and if you have automated TCP probes of well-known ports, you can scan every service running on the host and discover, e.g., a MySQL service on port 3306, whether it runs in a container or natively on the host.
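As a rough illustration of the host-networking approach (the image tag, the bind address, and the dig check below are assumptions to adapt to your environment):

```bash
# Single-node Consul server sharing the host's network namespace, so the IP it
# advertises is the host's own address rather than a container-internal one.
docker run -d --name consul --net host \
  hashicorp/consul agent -server -bootstrap-expect=1 \
  -client=0.0.0.0 -bind=192.168.1.10 -ui   # bind address is an example

# Registered services then resolve via Consul DNS on the host:
dig @127.0.0.1 -p 8600 servicename.service.consul
```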
With this setup, servicename.service.consul will resolve to some physical host's IP address. If a Docker container points at its current host for DNS, that lookup will route the service to some host, possibly the same one; this has worked reliably for me in the past.
Note that the relevant hostnames will be different in different environments: servicename.service.consul for a Consul-based setup, servicename.namespacename.svc.cluster.local in Kubernetes, maybe localhost in a developer-desktop environment. You need to make sure this is configurable, most straightforwardly via an environment variable.

Can all docker swarm instances run on same machine?

I have a couple of Docker swarm questions (Sorry for not splitting them up but they are all closely related):
Do all instances in a swarm have to run on different machines or can they all run on the same? (if having limited amount of hardware and just wanting to try swarm mode)
Do I have to run swarm mode to be able to communicate between instances?
What is the key difference between swarm mode and just running a number of containers as regular?
What are the options of communication between instances of containers? (in swarm and in regular mode) http? named pipes? other?
If using http communication between containers on same machine, will it be roughly similarly as fast as named pipes?
Is there any built in support for a message bus or similar in Docker?
Is there support for any consensus protocol in Docker?
Are there any GUI's for designing, managing, testing and/or debugging Docker swarms?
Can a container list other containers, stop/restart some and start new ones? (to be able to function as a manager for other containers)
Can a container be given access to OS-features (Linux in my case) to configure for instance a reverse proxy or port forwarding on the WAN?
Background: What I'm trying to figure out is how I should go about and build a micro service mesh using Docker. The containers will be running .NET Core. I'm not too keen on relying too much on specifically Docker since it may not be the preferred tech in a couple of years. What can/should I do with Docker and what can/should I do inside the containers. That's what I'm trying to figure out.
I've copied your questions and tried to answer them.
Do all instances in a swarm have to run on different machines or can they all run on the same? (if having limited amount of hardware and just wanting to try swarm mode)
You can have only one machine in a swarm and still run multiple tasks of the same service; in other words, a service's scale (replica count) can be greater than the number of actual machines. I have one test swarm with a single machine and another with three, and they work the same way.
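For example, a single machine can initialise a swarm and run several replicas of one service (the service name, image, and port here are just placeholders):

```bash
# Turn this one machine into a (single-node) swarm manager.
docker swarm init

# Run three replicas of one service on that single node.
docker service create --name web --replicas 3 -p 8080:80 nginx

# All three tasks are scheduled on the same machine.
docker service ps web
```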
Do I have to run swarm mode to be able to communicate between instances?
You have to run Docker in swarm mode in order to create a service; please see this link.
What is the key difference between swarm mode and just running a number of containers as regular?
The key difference, afaik, is that when a task goes down, Docker automatically brings up another task. And you can easily scale your services, which means you can have multiple tasks just by scaling your service up or down. As for running a plain container: when it goes down, you have to start another one manually. A small illustration follows below, reusing the placeholder service from the sketch above.
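```bash
# Change the number of tasks for an existing service; Swarm also replaces any task that dies.
docker service scale web=5

# A plain container, by contrast, needs an explicit restart policy (or manual intervention):
docker run -d --restart unless-stopped nginx
```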
What are the options of communication between instances of containers? (in swarm and in regular mode) http? named pipes? other?
I've currently only tested with a couple of wildfly servers in a swarm, which are on the same network. I'm not sure about others, but would love to find out. I've only read about RabbitMQ, but can't seem to find the link atm.
If using http communication between containers on same machine, will it be roughly similarly as fast as named pipes?
I can't say.
Is there any built in support for a message bus or similar in Docker?
I can't say.
Are there any GUI's for designing, managing, testing and/or debugging Docker swarms?
I've tested Rancher and Portainer.io; for a list of them I found this link.
Can a container list other containers, stop/restart some and start new ones?
I'm not sure why you would want to do that, but I guess it's possible; see this link.
Can a container be given access to OS-features (Linux in my case) to configure for instance a reverse proxy or port forwarding on the WAN?
I can't say.
@namokarm did a great job, and I'm filling in the gaps:
Benefits of Swarm over docker run or docker-compose.
All communication between containers has to be TCP/UDP etc. You could force two containers to only run on a single machine, then bind-mount their socket so they skip the network, but that would be a bit of an anti-pattern. Swarm is designed for everything to be distributed and TCP/UDP.
In a few cases, such as PHP-FPM + Nginx, I recommend bundling both in the same container (against Docker best practices, but trust me, it's easier than separate containers). This ensures they scale together (1-to-1 relationship) and stay fast, since they use local sockets to communicate. I only recommend this for a few setups like this, the other being ColdFusion + Nginx, because they are two parts of the same tool that produce an HTTP response... I don't recommend bundling images together in nearly all other cases, but I'm open to ideas :)
Rancher is no longer supporting Swarm. Portainer and SwarmPit are GUI options.
Yes, a container running something like Portainer/SwarmPit, or anything else controlling the Docker socket through a bind-mount or TCP, can control the whole Swarm. This is how all Docker management tools work :)
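For instance, a sketch of running such a management UI with the socket bind-mounted (the Portainer CE image and port follow its commonly documented setup; the volume name is illustrative):

```bash
# Management UI container that controls the local engine/Swarm via the Docker socket.
docker run -d --name portainer -p 9000:9000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce
```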
For a reverse proxy, you would run a container-based proxy like Traefik or Docker Flow Proxy (which sets up HAProxy) for Docker and Swarm.
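A rough sketch of the Traefik variant as a Swarm service, using Traefik v2 flags; the image tag, entrypoint name, and the fact that your app services still need traefik.* labels to be routed are assumptions to adapt:

```bash
docker service create --name traefik \
  --constraint node.role==manager \
  --publish 80:80 --publish 8080:8080 \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  traefik:v2.10 \
  --providers.docker.swarmMode=true \
  --providers.docker.exposedByDefault=false \
  --entryPoints.web.address=:80 \
  --api.insecure=true   # dashboard on :8080, for testing only
```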
Many of these topics are discussed in my DockerCon talks: https://www.bretfisher.com/dockercon18/

How can I expose kubernetes services running within docker?

What I want to do is run kubernetes within docker and expose the kubernetes services externally. I followed the docs on getting kubernetes running within docker. As long as I connect from the localhost, I can access my services. However, connecting from a different computer doesn't work. If I spin up a docker image directly, then I can access it. Only things running within kubernetes aren't exposed. Is this possible?
Ensure your nodes have externally reachable IP addresses.
Then create a service of type NodePort:
https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/services.md#type-nodeport
And direct traffic to nodes at the allocated port.
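For example, with kubectl (the deployment name below is a placeholder), a minimal sketch:

```bash
# Expose an existing deployment on every node's IP at an allocated high port.
kubectl expose deployment my-app --type=NodePort --port=80

# Shows the allocated node port (e.g. 80:3XXXX/TCP); reach it at <any-node-ip>:<node-port>.
kubectl get service my-app
```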

Docker host information and cluster

I am setting up a simple cluster using Docker on several hosts. Before using Docker, the processes were simply started with an argument giving the address of a config server. The first thing each process does is connect to the config server, get the addresses (host and port) of all the other services, and register itself with its host (and several different ports, one for each of the services it provides).
However, it does not seem to be possible to dockerize this workflow. Since a process in a container apparently cannot get the address and ports on the host (based on, for example, How to get the IP address of the docker host from inside a docker container), it does not know what to register itself as. Is this really not possible?
If not, are there any alternative ways this sort of setup is intended to be run using docker?
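One common workaround, sketched below under the assumption that you control how the containers are started, is to inject the host's address and the published ports explicitly as environment variables, so the process inside can register those values instead of trying to discover them. The variable names, port number, and IP-detection one-liner are illustrative only.

```bash
# Determine the host's address outside the container (method varies by platform).
HOST_IP=$(hostname -I | awk '{print $1}')

# Pass it in, together with the host port the service is published on, so the
# process can register <ADVERTISE_HOST>:<ADVERTISE_PORT> with the config server.
docker run -d \
  -p 7000:7000 \
  -e ADVERTISE_HOST="$HOST_IP" \
  -e ADVERTISE_PORT=7000 \
  my-service-image   # placeholder image
```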
