Dynamically start new Docker containers when needed?

Does there exist any component that monitors the usage of a server, a resource, or other Docker instance(s) and starts new Docker containers when more resources are needed?
The Docker containers may or may not be deployed on the same server.
For example:
1) when a message queue grows too fast, additional Docker containers that listen to that queue are started to help consume the messages;
2) when too many requests are made to a server through a load balancer, additional Docker instances are run.

What you are describing here is part of orchestration. Several tools exist for that, the best-known being Kubernetes and Marathon.
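For the message-queue example, an orchestrator can scale the consumers automatically. A minimal sketch with Kubernetes (the deployment name and thresholds are illustrative; scaling on queue depth rather than CPU would need custom metrics):

# Keep between 2 and 10 replicas of the (hypothetical) queue-worker
# deployment, targeting 80% average CPU utilization across pods.
kubectl autoscale deployment queue-worker --min=2 --max=10 --cpu-percent=80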

Related

Container captures ports - can I use some kind of router software?

I start a Docker container on my machine. It will have to talk to two services on remote nodes and for this it occupies ports 80 and 443.
I log into the container and start one task from the command line. The task does some initial data exchange with the remote nodes. Having done that, it starts a very long computation without any need to contact the remote nodes anymore.
I'd like to run that task several times in parallel, but I cannot start multiple instances of the container, because they would clash over the ports.
Is there some kind of software router that I can use to sort out this problem?
I know from here that I can run multiple shells in my container, but I am still curious about the possibility of using a software router to serve multiple containers.
NB: I am not using any docker-compose or kubernetes setting. Just plain, simple containers.
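For what it's worth, if the clash comes from publishing the same host ports with -p, a plain-Docker workaround is to give each instance its own host-side mapping (a sketch; mytask is a placeholder image name):

# Purely outbound connections need no published ports at all:
docker run -d --name task1 mytask
docker run -d --name task2 mytask

# If the ports must be reachable from the host, remap them per instance:
docker run -d --name task3 -p 8080:80 -p 8443:443 mytask
docker run -d --name task4 -p 9080:80 -p 9443:443 mytask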

Direct requests only to one container of the docker swarm service

Is it possible to make the Docker load balancer, which uses round robin, direct requests to only one container of a global Docker service deployed on multiple hosts? If that container goes down, requests should be forwarded to the other running containers.
The only way I can think of is using an external load balancer like nginx, but that requires an additional Docker service.
You can achieve the same result by using replicated mode and having only one replica of the container running. In that case you rely on Docker to ensure that an instance is always available.
Alternatively, the recommended way is to use an external load balancer. Check Use swarm mode routing mesh to see the different usages.
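A minimal sketch of the single-replica approach (service name and image are placeholders):

# One replica; swarm reschedules it on another node if its host fails,
# and the routing mesh keeps forwarding port 80 to wherever it runs.
docker service create --name web --replicas 1 --publish 80:80 nginx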

How to Distribute Jenkins Slave Containers Within Docker Swarm

I would like my Jenkins master (not containerized) to create slaves within containers. So I have installed the Docker plugin into Jenkins, created a Docker server, configured it, and Jenkins does indeed spin up a slave container fine after the job is created.
However, after I created another Docker server and formed a swarm out of the two of them, running Jenkins jobs again continued to deploy containers only on the original server (which is now also a manager). I'd expect the swarm to balance the load and distribute the newly created containers evenly across the swarm. What am I missing?
Do I have to use a service perhaps?
Docker images by themselves are not load balanced, even if deployed in a swarm. What you're looking for would indeed be a Service definition. Just be careful about port allocation. If you deploy your Jenkins slaves to listen on port 80, etc., all swarm hosts will listen on port 80 and mesh-route to the containers.
That basically means you couldn't deploy anything else to port 80 on those hosts. Once that's done, however, any requests to those hosts would be load balanced across the containers.
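A minimal sketch of such a service definition (the agent image is a placeholder; use whatever JNLP slave image fits your setup):

# Two Jenkins slave replicas, spread across the swarm nodes by the scheduler:
docker service create --name JenkinsService --replicas 2 my-jnlp-slave-image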
The other nice thing is that you can dynamically change the number of replicas with service update:
docker service update JenkinsService --replicas 42
While 42 may be extreme, you could obviously change it :)
At the time there was nothing I could find in swarm that would help me manipulate container distribution across the swarm nodes.
I ended up using the more flexible Kubernetes for that purpose. I think Marathon is capable of that as well.

Docker Engine Swarm: Access Network from "outside" swarm

I'm working with a Docker (1.12) Engine Swarm with two nodes running in a swarm "bundle":
redis
my node.js application
They are both connected through their own network: docker network create -d overlay antispam
I use redis as an in-memory database, and I have another node.js application that is used to push new data to this database in (usually) one-hour intervals. The data-pusher is a short-lived application that takes some text files, parses them and pushes their content to redis. There is no "schedule short-running containers" feature in Docker swarm yet, so I need to run my application as a "normal" docker container and thus need to access redis from "outside" of the swarm.
The obvious solutions would be:
... to make the redis port public, but that's an ugly solution.
... to convert the short-running app into a long running one that usually just sleeps and just wakes up every now and then. Call me old-school, but I don't like to waste resources for just having a sleeping process.
... to use RabbitMQ or MySQL (available as services in my "non-docker" environment) as a data relay, write my data there and have the application in the swarm read it out again. That would add another step of work.
Is there any other option to access the network of those two containers from outside?
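For reference, Docker 1.13 later added exactly this: overlay networks can be created as attachable, which lets plain docker run containers join a swarm overlay network (a sketch; data-pusher is a placeholder image):

# Recreate the overlay network so standalone containers may attach (Docker 1.13+):
docker network create -d overlay --attachable antispam

# Run the short-lived pusher as a normal container on the swarm network:
docker run --rm --network antispam data-pusher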

How could one use Docker Compose to synchronize container execution?

The problem I'm trying to solve is similar to Docker Compose wait for container X before starting Y. I use Docker Compose to launch several containers, all running on the same host, three of which are PostgreSQL, Liquibase, and a Web application servlet running in Tomcat. The PostgreSQL and Web application containers are both long running while the Liquibase container is ephemeral. The containers must not only start in order, but each container must also wait for the preceding container to be available or complete. In particular, the PostgreSQL server must be ready to process SQL commands before the Liquibase container runs, and the Liquibase schema migration task must complete before the Web application starts to ensure that the database schema is in a valid state.
I understand that I can achieve this synchronization using two wrapper "wait-for" scripts that poll for certain conditions (and this may be the only available option), the first of which would poll the availability of the PostgreSQL server to process commands while the second, which would run just prior to the Web application, could poll for the presence of a particular database object. However, like process synchronization, I think container synchronization is a common problem that can be addressed with more general inter-process communication and synchronization primitives like semaphores. Docker Compose would likely benefit the most from such synchronization mechanisms, but Docker containers might find them useful, too, for example, to establish multiple synchronization points within a container.
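A minimal sketch of the first such wait-for wrapper (assuming pg_isready is available in the image and DB_HOST names the PostgreSQL container):

#!/bin/sh
# wait-for-postgres.sh: poll until PostgreSQL accepts connections,
# then exec the real command passed as arguments.
set -e
until pg_isready -h "${DB_HOST:-postgres}" -p 5432 -q; do
  echo "waiting for PostgreSQL..." >&2
  sleep 1
done
exec "$@"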
Until Docker Compose or Docker supports container synchronization primitives (similar to process synchronization primitives, but accessible from the shell), Dependencies for docker-compose with inotify is one of the better solutions that I've found to the Docker Compose container synchronization problem.
In addition to consul, etcd, and ZooKeeper, MQTT retained messages are another simple mechanism that Docker containers might use to coordinate activities. Mosquitto is a lightweight, open-source implementation of MQTT.
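A sketch of that handshake with the Mosquitto command-line clients (broker host and topic name are illustrative):

# Publisher: after the migration finishes, leave a retained marker on the topic.
mosquitto_pub -h broker -t sync/schema-ready -m done -r

# Subscriber: block until one message arrives; because the marker is retained,
# this returns immediately even if it was published before we subscribed.
mosquitto_sub -h broker -t sync/schema-ready -C 1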
I've come to the conclusion that Docker Compose is not the most appropriate tool for container synchronization. Tools like Kubernetes or Marathon facilitate more sophisticated container synchronization. What is the best Docker Linux Container orchestration tool? compares available container synchronization tools.
