I have a Docker container that I want to deploy to a CoreOS cluster; the container has to download my app from a git repo.
Let's say the app container runs nginx / Node.js.
How should I update it?
If I submit the container and start it, that works the first time. But the second time I'll have to stop/start the container with fleetctl, so I'll obviously have downtime. Should I start up new containers that are derived from that container?
Here's a complete walkthrough on exactly such a scenario:
http://coreos.com/blog/zero-downtime-frontend-deploys-vulcand.html
Instead of pulling down your application from GitHub inside your container, you should bake your application code into your image. Your container should then start its services within a few seconds. To achieve zero downtime you should keep the old container running until your new container has started and is ready to accept new connections. You could do this by separating nginx into its own container and keeping it running all the time.
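A minimal sketch of what "baking the code in" looks like, assuming a Node.js app with a server.js entrypoint (base image, paths and file names are assumptions):

FROM node
WORKDIR /app
COPY . /app                    # the application code becomes part of the image at build time
RUN npm install --production
CMD ["node", "server.js"]

Because the code ships inside the image, bringing up a new container is just a pull and a run, which is what makes the overlapping old/new container approach practical.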
I'm working with a Docker container that has Debian 11 and a server inside.
I need to update this server and do other things on a regular basis. I've written several scripts that can do it, but I encountered a serious problem.
If I want to update the server and other packages I need to reboot the container.
I'm obviously able to do so from the computer Docker is installed on (in my case Docker Desktop running with WSL2 on Windows 10), I can reboot the container easily, but I need to automate it.
The simplest way would be to add the shutdown command to the scripts I've written. I was reading about it, but found nothing. Is there any way to reboot this container from the Debian inside it? If not, how can it be achieved and how complicated is it?
I was trying to invoke the standard Linux commands to shut down or reboot the system on the Debian inside the container.
I expect a guide, if it's possible and worth the effort.
The only way to trigger a restart of a container from within the container is to first set a restart policy on the container such as --restart=on-failure and then simply stop the container, i.e., let the main process terminate itself. The Docker engine would then restart the container.
This, however, is not the way Docker is intended to be used! Docker containers are not VMs and instead are meant to be ephemeral:
By "ephemeral", we mean that the container can be stopped and destroyed, then rebuilt and replaced with an absolute minimum set up and configuration.
This means you shouldn't be updating the server within a running container, but instead should update/rebuild the image and start a new container from it!
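A minimal sketch of the restart-policy workaround, assuming a hypothetical image my-debian-server whose entrypoint script performs the update:

docker run -d --name server --restart=on-failure my-debian-server   # on the host: set a restart policy
# inside the container, the entrypoint/update script ends with a non-zero exit,
# so the main process terminates and the engine restarts the container:
exit 1

But again, the recommended route is to rebuild the image with the updated packages and replace the container.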
We have a Docker swarm and we normally run service containers using the Docker service create API. Now we are seeing that after a certain time interval the services stop responding (meaning the application running inside the container). As of now the solution looks like restarting the service after a specific time interval, and it worked when we tried it manually.
(Screenshots of the top command output on the host worker node and of the docker stats output were included here.)
I wanted to know what the best approach is to fix it. Also, can we automate the solution?
I have a bunch of Docker containers running in swarm mode (services). If the whole server restarts then the containers start one by one after the reboot. Is there a way to set the order in which the containers are created and started?
P.S. I can't use docker-compose as these services were created dynamically through the Docker Remote API.
You can try setting a shorter restart delay (with --restart-delay) on the services you want to start first and a longer one on the next, and so on.
But I am not sure that works.
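A minimal sketch of that idea, with hypothetical service names:

docker service update --restart-delay 5s db      # should come up first
docker service update --restart-delay 30s app    # a bit later
docker service update --restart-delay 60s proxy  # last

Note that this only delays restarts; it does not guarantee that the earlier service is actually ready before the next one starts.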
I am deploying my app via docker-compose; it has 2 services: the server app and nginx.
So in CI I created following script of instructions:
docker-compose build # build new images
docker-compose down # stop and remove old containers
docker-compose up -d # start new containers
But the server app container has its own start-up time, so right after the app starts I see a 502 page, because the server app is not yet ready to receive calls, but nginx is.
What I want to do is to keep the old containers running while building and starting the new containers, wait some time for the server app to be ready, and then somehow swap them in. So the whole operation would be seamless for users.
How can I do it?
It is not possible with docker-compose alone. But, yes, it is possible with a container orchestration tool like Kubernetes, the most popular such tool.
From the Kubernetes official site:
Automated rollouts and rollbacks
Kubernetes progressively rolls out changes to your application or its configuration, while monitoring application health to ensure it doesn't kill all your instances at the same time. If something goes wrong, Kubernetes will rollback the change for you. Take advantage of a growing ecosystem of deployment solutions.
Self-healing
Restarts containers that fail, replaces and reschedules containers when nodes die, kills containers that don't respond to your user-defined health check, and doesn't advertise them to clients until they are ready to serve.
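A minimal sketch of what this looks like with kubectl, assuming a Deployment named myapp (names and image tag are hypothetical):

kubectl set image deployment/myapp myapp=registry.example.com/myapp:2.0   # triggers a rolling update
kubectl rollout status deployment/myapp                                   # waits until the new pods are ready
kubectl rollout undo deployment/myapp                                     # roll back if something went wrong

During the rollout, Kubernetes keeps the old pods serving traffic until the new ones pass their readiness checks, which is exactly what the docker-compose script above is missing.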
Brand spanking new to Docker here. I have Docker running on a remote VM and am running a single dummy container on it (I can verify the container is running by issuing a docker ps command).
I'd like to secure my Docker installation by giving a non-root user access to Docker:
sudo usermod -aG docker myuser
But I'm afraid to muck around with Docker while any containers are running in case "hot deploys" create problems. So this has me wondering, in general: if I want to do any sort of operational work on Docker (daemon, I presume) while there are live containers running on it, what do I have to do? Do all containers need to be stopped/halted first? Or will Docker keep on ticking and apply the updates when appropriate?
Same goes for the containers themselves. Say I have a myapp-1.0.4 container deployed to a Docker daemon. Now I want to deploy myapp-1.0.5, how does this work? Do I stop 1.0.4, remove it from Docker, and then deploy/run 1.0.5? Or does Docker handle this for me under the hood?
if I want to do any sort of operational work on Docker (daemon, I presume) while there are live containers running on it, what do I have to do? Do all containers need to be stopped/halted first? Or will Docker keep on ticking and apply the updates when appropriate?
Usually, all containers are stopped first.
That typically happens when I upgrade Docker itself: I find all my containers stopped (except the data containers, which are only ever created, and remain so).
Say I have a myapp-1.0.4 container deployed to a Docker daemon. Now I want to deploy myapp-1.0.5, how does this work? Do I stop 1.0.4, remove it from Docker, and then deploy/run 1.0.5? Or does Docker handle this for me under the hood?
That depends on the nature and requirements of your app: for a completely stateless app, you could even run 1.0.5 (with different host ports mapped to your app's exposed port), test it a bit, and stop 1.0.4 when you think 1.0.5 is ready.
But for an app with any kind of shared state or resource (mounted volumes, shared data container, ...), you would need to stop and rm 1.0.4 before starting the new container from 1.0.5 image.
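A minimal sketch of the stateless case, with hypothetical names and ports:

docker run -d --name myapp-1.0.5 -p 8081:80 myapp:1.0.5   # new version on a different host port
# test http://host:8081, then point your proxy/load balancer at 8081
docker stop myapp-1.0.4 && docker rm myapp-1.0.4          # retire the old version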
(1) why don't you stop them [the data containers] when upgrading Docker?
Because... they were never started in the first place.
In the lifecycle of a container, you create it and then start it (docker run does both). But a data container, by definition, has no process to run: it just exposes VOLUME(s) for other containers to mount (--volumes-from).
(2) What's the difference between a data/volume container, and a Docker container running, say a full bore MySQL server?
The difference is, again, that a data container doesn't run any process, so it can't exit when that process stops: there is no process to run in the first place.
The MySQL server container would be running as long as the server process doesn't stop.
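A minimal sketch of that lifecycle, with hypothetical names (this was the common pattern before named volumes):

docker create -v /var/lib/mysql --name mysql-data busybox    # created, never started
docker run -d --name mysql --volumes-from mysql-data mysql   # the server container mounts its volume

The mysql container keeps running as long as the server process does; mysql-data just sits in the Created state holding the volume.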