Quite often, when I start my docker-composed app, I like to check that everything started correctly and everything's fine.
So I do docker-compose up, look at the logs, and then I have to do docker-compose stop and docker-compose up -d.
Those are too many steps and having to stop the container means downtime on my server.
Isn't there a way to send docker to the background?
I tried Ctrl+Z, but then if I try to exit the ssh session I get "There are stopped jobs.", so that's not the correct way to do this.
I use docker-compose, but I'd be curious if this is possible with docker also.
Thanks
After Ctrl+Z, just run bg; the task will resume running in the background and you are safe to close the ssh session.
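A minimal sketch of that flow in bash (disown is an optional extra step so the job is removed from the shell's job table before you log out):

docker-compose up   # watch the logs until everything looks healthy
# press Ctrl+Z to suspend the foreground job
bg                  # resume the suspended job in the background
disown              # optional: detach the job from the shell's job table
exit                # the ssh session can now be closed safely

Alternatively, once you know the stack is healthy, docker-compose up -d in the first place avoids the juggling entirely.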
I have an EC2 instance running a dockerized application using docker-compose.
Every time I run docker-compose up, many days' worth of logs are printed to stdout for all services. This means that I have to wait up to an hour before all the old logs have been printed and I start seeing recent ones.
Any ideas?
Your problem is that the old containers created by docker-compose are re-used.
Starting with docker-compose up --force-recreate should do the trick.
I remember this happening in the past, but for me the problem no longer occurs, so it could also be something else.
Please make sure the following:
You are using a modern version of docker-compose (I am running 1.29; check with docker-compose version).
The containers you are starting are not already running (check with docker-compose ps); if they are, docker-compose up will attach to them instead of starting them, and printing all the logs accumulated in the container is the expected behavior. (A short sketch of both checks follows.)
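A quick sketch of those checks plus the recreate step, assuming a compose project in the current directory:

docker-compose version                   # confirm a reasonably modern docker-compose
docker-compose ps                        # are the containers already running?
docker-compose up -d --force-recreate    # recreate containers instead of re-attaching
docker-compose logs --tail=50 -f         # follow only the most recent log lines

docker-compose logs --tail is also useful on its own when you just want to skip the backlog without recreating anything.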
I have correctly deployed a Docker container which runs a Python script that grabs some data from the internet and slaps it in BigQuery. The container works well on my machine and on a GCE instance that I've provisioned.
Now, everything works well for the most part, but I am failing to understand why the Docker container always restarts after exiting (apparently correctly). Logs, in this case, seem to be fairly useless, as there is no error whatsoever. My current hunch is that something is failing silently, forcing the container to restart.
Is there any way to find out the reboot reason for a given Docker container?
Things tried so far
I've tried to print the exit code of the container in the following way. The result is always 0, regardless of the restart cycles.
# Poll the container's exit code once per second
while true; do
    docker inspect my_container --format='{{.State.ExitCode}}'
    sleep 1
done
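Since a once-per-second poll can miss a quick exit-and-restart, streaming the daemon's event log is a sturdier alternative; a sketch, reusing the my_container name from above (recent Docker versions attach an exitCode attribute to die events):

# Stream die/restart events for the container; die events carry the exit code
docker events --filter 'container=my_container' --filter 'event=die' --filter 'event=restart'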
The Google Cloud documentation describes different ways to review your container-related logs, including container starts and stops.
In any case, I think there is no problem with your container: by default, Compute Engine will restart a container on exit, although you can specify a different restart policy if you need to. Please see the relevant documentation.
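If you do want the container to stay down after a clean exit, the declared restart policy can be changed; a hedged sketch, assuming the VM was created with create-with-container and is named my-instance:

gcloud compute instances update-container my-instance \
    --container-restart-policy=on-failure   # or 'never'; the default is 'always'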
I have a python script that launches multiple programs in different screen sessions (gnu screen: terminal multiplexer) and exits.
The screen sessions keep running in the background.
Everything works fine when I run the script directly on my host machine.
Now I have dockerized the entire thing, but the Docker container exits after the script finishes.
docker run -d myimage
After reading a bit, I realized that this is normal Docker behavior: if no foreground process is running, the container will exit.
But I want to keep the docker container up, what should I do?
I saw that running with the -it flag keeps it up, but I want to detach from the container and still keep it up.
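One common workaround is to give the container a long-lived foreground process once the script has launched the screen sessions; a sketch, where the script path is an assumption about your image:

# Launch the screens, then keep PID 1 alive so the container stays up
docker run -d myimage sh -c "python /app/launch_screens.py && tail -f /dev/null"

Remember that when PID 1 exits, every process in the container (including the detached screen sessions) goes with it, which is why something has to stay in the foreground.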
I think I have an understanding of this but just would like some clarification.
I have a docker-compose file with all my services in it. I did a docker-compose up and everything is fine. One of my services is a worker that needs to be restarted whenever my files change, so for now I do a bind mount from my host to the container. When I make some changes on my local system, I restart the worker container and it should pick up the changes.
If I do docker-compose restart, then it works and my changes are picked up.
If I do docker restart, then it seems to just keep the old environment: the files my worker runs are the "old" ones, even though I can see the changed files when I shell into the container.
I'm guessing it has something to do with docker-compose reloading configs or something? For now I'm just going to continue using docker-compose restart, but I'd like a better understanding of what's going on.
Thanks for any help.
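One way to narrow this down is to compare what the container actually sees after each kind of restart; a sketch, with worker_1 standing in for the real container name:

docker inspect --format '{{ json .Mounts }}' worker_1       # is the bind mount still pointing at the host path?
docker inspect --format '{{ .State.StartedAt }}' worker_1   # did the container really restart?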
I have a deployed application running inside a Docker container, which is, in effect, a websocket client that runs forever. On every deploy I rebuild the container and start it with docker run, using the command set in the Dockerfile.
Now, I've noticed a few times that the process occasionally dies without restarting. When running docker ps, I can see that the container is up and has been up for 2 weeks, yet the process running inside it has died without the host being any the wiser.
Do I need to go so far as to have a process manager inside of the docker container to manage the containerized process?
EDIT:
Dockerfile: https://github.com/DVG/catpen-edi/blob/master/Dockerfile
We've developed a process-manager tailor-made for Docker containers and have been using it with quite a bit of success to solve exactly the problem you describe. The best starting point is to take a look at chaperone-docker on github. The readme on the first page contains a quick link to a minimal base image as well as a fully configured LAMP stack so you can try it out and see what a fully-configured image would look like. It's open-source and fully documented.
This is a very interesting problem related to PID 1 and the fact that Docker replaces PID 1 with the command specified in CMD or ENTRYPOINT. What's happening is that the child process isn't automagically adopted by anything if the parent dies, and it becomes an orphan (since there is no PID 1 in the sense of a traditional init system like you're used to). Here is some excellent reading to give you a few ideas. You may get some mileage out of their baseimage-docker image, which comes with their simplified init system (my_init) and will solve some of this problem for you. However, I would strongly caution you against automatically adopting the Phusion mindset for all of your containers, as there exists some ideological friction in that space. I can't recall any discussion on Docker's GitHub about a potential minimal init system to solve this problem, but I can't imagine it will be a problem forever. Good luck!
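Worth noting, assuming a reasonably recent Docker version: docker run now has a --init flag that runs a minimal init (tini) as PID 1 so orphaned children get reaped; a one-line sketch, with myimage as a placeholder:

docker run -d --init myimage   # tini becomes PID 1 and reaps orphaned child processes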
If you have two ruby processes, it sounds like the child hasn't exited; the application has just stopped working. It's likely the EventMachine reactor is sitting in the background.
Does the EDI app really need to spawn the additional Ruby process? This only adds another layer between Docker and your app. Run the server directly with CMD [ "ruby", "boot.rb" ]. If you find the problem still occurs with a single process then you will need to find what is causing your app to hang.
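If it's unclear whether that second process is still hanging around, listing the container's processes from the host is a quick check (my_edi_container is a placeholder name):

docker top my_edi_container   # shows the processes running inside the container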
When a process is running as PID 1 in Docker, it will need to handle the SIGINT and SIGTERM signals too.
# Placeholder teardown; replace with the app's real cleanup logic
def shut_down
  puts "Shutting down..."
end

# Trap ^C (SIGINT)
Signal.trap("INT") {
  shut_down
  exit
}

# Trap `kill` (SIGTERM)
Signal.trap("TERM") {
  shut_down
  exit
}
Docker also has restart policies for when the container does actually die.
docker run --restart=always

no: Do not automatically restart the container when it exits. This is the default.

on-failure[:max-retries]: Restart only if the container exits with a non-zero exit status. Optionally, limit the number of restart retries the Docker daemon attempts.

always: Always restart the container regardless of the exit status. When you specify always, the Docker daemon will try to restart the container indefinitely. The container will also always start on daemon startup, regardless of the current state of the container.

unless-stopped: Always restart the container regardless of the exit status, but do not start it on daemon startup if the container has been put into a stopped state before.
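For the use case above (a long-running websocket client that should come back up if it dies), the usual shape is something like this, with myimage as a placeholder:

docker run -d --restart=unless-stopped myimage

Keep in mind that restart policies only fire when the container's main process exits; they won't notice a child process dying while PID 1 stays alive, which is why the single-process and signal-handling advice above still matters.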