On-demand Docker container creation/deletion

We want to scale Docker containers horizontally based on user demand. Is there a Docker API for on-demand container creation and deletion? What would be the best approach for the use case given below?
Use case:
We have a service running inside a Docker container which is directly accessible to a user.
Each user will be given a separate container, so we need to create a Docker container whenever a user requests the service, and delete containers once they have been idle for a specific period of time.

I believe you are looking for the Docker Engine API, which you can find here.
There are SDKs for Go and Python, and a RESTful API for all other languages.
In the docs you'll find examples of how to start and stop containers programmatically.
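For example, with the Python SDK (pip install docker), the per-user lifecycle could look roughly like this - a minimal sketch, where the image name my-service:latest is a placeholder and detecting "idle" is left to your own bookkeeping:

import docker

client = docker.from_env()

def create_user_container(user_id):
    # One container per user; label it so idle containers can be found later.
    return client.containers.run(
        "my-service:latest",            # hypothetical image name
        name=f"service-{user_id}",
        detach=True,
        labels={"owner": str(user_id)},
    )

def remove_container(container):
    # Called once the container has been idle past your threshold.
    container.stop()
    container.remove()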

Related

Docker Swarm for managing headless containers, and keeping them updated (or watchtower?)

I've been trying to devise a strategy for using Docker Swarm to manage a bunch of headless containers - I don't need a load balancer, exposed ports, or auto-scaling.
The only thing I want is the ability to update all of the containers (on all nodes) if any of the images are updated. Each running container will need to have a specific --hostname.
Is running docker service even viable for this? Or should I just do a normal docker run targeting specific nodes to specify the --hostname I want? The reason I'm even asking about docker service is that it allows you to do an update (forcing an update for all containers if there are updated images).
I was also thinking that Docker Swarm would make it a bit easier to keep an eye on all the containers (i.e. manage them from a central location).
The other option I was looking at was Watchtower, running on each server that hosts one of the containers, as an alternative to Swarm. My only issue with this is that it doesn't provide any orchestration or centralized management.
Does anyone have ideas about which would be the better option given this scenario?
Docker Swarm does not give you any advantage regarding rolling updates apart from the docker service command. Swarm provides horizontal scaling and places a load balancer in front of a set of replicas (a "service"), as well as some other goodies such as replicating Docker events across the swarm nodes.
docker service update --force would work as expected.
However, you should probably use both: Docker Swarm for orchestration and Watchtower for rolling updates.
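If you drive this from the Python SDK instead of the CLI, forcing a redeploy of every service could look roughly like this - a sketch assuming docker SDK >= 3.0, where Service.force_update() is available:

import docker

client = docker.from_env()

# Equivalent of running `docker service update --force <name>` for each
# service: forces tasks to be rescheduled even if nothing else changed.
for service in client.services.list():
    service.force_update()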

Microservice Discovery With Docker And Consul

I'm interested in building microservices, but I'm getting a bit stuck on how service discovery should work when I've got multiple instances of a single microservice.
Suppose I've got an "OCR" app that reads text from an image.
Deploying that as 1 instance is easy, however, what if I want 50 instances of those?
I can run Docker Swarm to spin up those 50 instances, but how do I send a request to any one of them? I don't want to have to know the exact container name of a specific instance, and I don't care which one I get as long as it's healthy - just send my request to any of the "OCR" containers.
How do I achieve this?
I've been looking into Consul and it seems very promising.
I especially like the HTTP API (although I'm a little unsure of how I would retrieve the URL for the service I'm interested in - would I need to do it before every request to make sure I'm pointing to a healthy instance?).
If I wanted to use Consul, what would the steps be in relation to Docker Swarm? Do I just need to register the service in Consul when the container starts up, and it will automatically get de-registered if it fails, right?
After that, all of my containers just need to be aware of where Consul is (and I guess I could stick a load balancer in front of it, in case I ever want to scale out Consul itself to a bunch of instances?).
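What I picture is something like this before each request - a rough sketch, assuming Consul is reachable at a hypothetical http://consul:8500 and using the health endpoint /v1/health/service/<name>?passing=true, which only returns instances whose health checks pass:

import random
import requests

def healthy_instance(service_name, consul="http://consul:8500"):
    # Ask Consul for instances of the service that are passing their checks.
    entries = requests.get(
        f"{consul}/v1/health/service/{service_name}",
        params={"passing": "true"},
    ).json()
    svc = random.choice(entries)["Service"]
    return f"http://{svc['Address']}:{svc['Port']}"

url = healthy_instance("ocr")          # hypothetical service name
resp = requests.get(f"{url}/do/something")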
Please let me know if I'm going completely in the wrong direction.
If anyone could also suggest any articles or books on this topic I'd appreciate it.
Thanks.
When you're using Docker Swarm Mode, you get service discovery with load balancing for free.
DNS round robin (DNSRR) is the key concept: https://docs.docker.com/engine/swarm/key-concepts/#load-balancing
Say you deploy OCR-app:
docker service create --network dev --name OCR-app --replicas 5 OCR-app:latest
The swarm manager will deploy OCR-app - in this case five replicas - onto the nodes of your swarm. Every other service that is part of the same Docker network dev can reach OCR-app by its name, e.g. GET http://OCR-app:4000/do/something.
Internally, Docker Swarm uses round robin to automatically forward the request to one of the five replicas.
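From the caller's side that is all there is to it - a minimal sketch, assuming the calling code runs in a container attached to the same dev network and that OCR-app listens on port 4000, as in the example URL above:

import requests

# Swarm's internal DNS resolves the service name "OCR-app" and
# load-balances the request across the healthy replicas.
resp = requests.get("http://OCR-app:4000/do/something")
print(resp.status_code)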

Combining Chef And Docker

I am having a hard time figuring out how I should combine Chef and Docker to get the best of both.
Right now I am using Chef to automatically pull a docker image and create a container.
But things get messy when I want to change the configuration inside the container.
I read about knife-container, but I didn't understand how one can bootstrap a container and a new VM (on Amazon, for example) all together.
I would suggest that if all you want to do is manage Docker images/containers, you don't really need Chef.
Docker provides tools like:
Fig (http://www.fig.sh/), which brings up multiple containers as one logical unit.
Swarm (https://github.com/docker/swarm/), which allows you to abstract away the machines you have for deployments. For example, "My app needs 2GB of RAM, 1 CPU, 10GB of HD, which machine has available resources?"
Machine (https://github.com/docker/machine), which allows you to create VMs in the cloud in pretty much any provider.
A REST API (https://docs.docker.com/reference/api/docker_remote_api/), which allows you to remotely start/stop containers etc.
In my opinion, that suite of tools replaces the need for Chef if all you're going to do is manage Docker images and containers.
As someone already noted, don't change configs after a container has started; it's better to build a new image or restart the container. You could also mount the configs externally to the container, modify them on the host, and then restart the container.
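A rough sketch of that last approach with the Python SDK - the image name and paths are hypothetical; the config lives on the host, so you edit it there and then bounce the container:

import docker

client = docker.from_env()

# Bind-mount a host directory holding the config into the container.
container = client.containers.run(
    "myapp:latest",                # hypothetical image
    name="myapp",
    detach=True,
    volumes={"/srv/myapp/conf": {"bind": "/etc/myapp", "mode": "ro"}},
)

# ... edit /srv/myapp/conf/app.conf on the host, then pick up the change:
container.restart()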

How do I do docker clustering or hot copy a docker container?

Is it possible to hot-copy a Docker container, or do some sort of clustering with Docker for HA purposes?
Can someone simplify this: "How to scale Docker containers in production"?
Docker containers are not designed to be VMs and are not really meant for hot-copies. Instead, you should define your container such that it has a well-known start state. If the container goes down, the alternate should start from that well-known start state. If you need to keep track of state that the container generates at run time, this has to be done externally to Docker.
One option is to use volumes to mount the state (files) onto the host filesystem, and then use RAID, NFS, or any other means to share that filesystem with other physical nodes. You can then mount the same files into a second Docker container on a second host, with the same state.
Depending on what you are running in your containers, you may also be able to handle state sharing inside the containers themselves, for example using MongoDB replica sets. To reiterate, though: containers are not, as of yet, designed to be migrated with runtime state.
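As a minimal sketch of the volume approach with the Python SDK - assuming /mnt/shared/appstate is a directory that is shared or replicated between hosts (e.g. over NFS), and stateful-app:latest is a placeholder image:

import docker

client = docker.from_env()

# On the primary host: the container writes all of its state under the
# shared mount rather than into its own writable layer.
primary = client.containers.run(
    "stateful-app:latest",
    detach=True,
    volumes={"/mnt/shared/appstate": {"bind": "/var/lib/app", "mode": "rw"}},
)

# On the standby host, the exact same call starts a replacement container
# that picks up the same files - the well-known state - after a failure.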
There is a variety of technologies around Docker that could help, depending on what you need HA-wise.
If you simply wish to start a stateless service container on a different host, you need an overlay network, such as Weave.
If you wish to replicate data across hosts for something like database failover, you need a storage solution, such as Flocker.
If you want to run multiple services with load balancing, and not care which host each container runs on as long as X instances are up, then Kubernetes is the kind of tool you need.
It is possible to make many Docker-related tools work together; we have a few stories on our blog already.

Is it wrong to run a single process in docker without providing basic system services?

After reading the introduction to phusion/baseimage, I feel like creating containers from the Ubuntu image (or any other official distro image) and running a single application process inside the container is wrong.
The main reasons in short:
No proper init process (that handles zombie and orphaned processes)
No syslog service
Based on these facts, most of the official Docker images available on Docker Hub seem to do things wrong. As an example, the MySQL image runs mysqld as the only process and does not provide any logging facilities other than the messages written by mysqld to STDOUT and STDERR, accessible via docker logs.
Now the question arises: what is the appropriate way to run a service inside a Docker container?
Is it wrong to run only a single application process inside a docker container and not provide basic Linux system services like syslog?
Does it depend on the type of service running inside the container?
Check this discussion for a good read on the issue. Basically, the official party line from Solomon Hykes and Docker is that containers should be as close to single-process micro servers as possible. There may be many such servers on a single 'real' server. If a process fails, you should just launch a new container rather than try to set up init handling etc. inside the containers. So if you are looking for the canonical best practice, the answer is: yes, no basic Linux services. It also makes sense when you think in terms of many containers running on a single node - do you really want them all running their own copies of these services?
That being said, the state of logging in Docker is famously broken; even Solomon Hykes, the creator of Docker, admits it's a work in progress. In addition, you normally need a little more flexibility for a real-world deployment. I normally mount my logs onto the host system using volumes and have a logrotate daemon etc. running on the host VM. Similarly, I either install sshd or leave an interactive shell open in the container so I can issue minor commands without relaunching, at least until I am really sure my containers are air-tight and no more debugging will be needed.
Edit:
With Docker 1.3 and the exec command, it's no longer necessary to "leave an interactive shell open."
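Both tricks are also available from the Python SDK - a sketch with a hypothetical image and paths, assuming docker SDK >= 3.0, where exec_run() returns an (exit_code, output) pair:

import docker

client = docker.from_env()

# Write the app's log directory straight to the host so a logrotate daemon
# on the host VM can manage it.
container = client.containers.run(
    "myservice:latest",
    detach=True,
    volumes={"/var/log/myservice": {"bind": "/app/logs", "mode": "rw"}},
)

# Equivalent of `docker exec`: run a one-off command in the container
# without sshd or a long-lived interactive shell.
exit_code, output = container.exec_run("tail -n 5 /app/logs/app.log")
print(exit_code, output.decode())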
It depends on the type of service you are running.
Docker allows you to "build, ship, and run any app, anywhere" (from the website). That tells me that if an "app" consists of, or requires, multiple services/processes, then those should be run in a single Docker container. It would be a pain for a user to have to download and then run multiple Docker images just to run one application.
As a side note, breaking up your application into multiple images is subject to configuration drift.
I can see why you would want to limit a Docker container to one process. One reason is spin-up time: when creating a Docker provisioning system, it's essential to keep the time it takes to bring a container up to a minimum so that scaling sideways is fast. This means that if I can get away with running a single process per Docker container, then I should go for it. But that's not always possible.
To answer your question directly: no, it's not wrong to run a single process in Docker.
HTH
