I've got a node docker container that starts my app with nodemon.
What I would like to do is access that container and somehow view the nodemon console log.
I can access the container shell with docker exec -ti <container id> bash, and ps aux tells me that nodemon is running my app, but I couldn't find any documentation about accessing the output of nodemon while it's running.
Should I forward output to a file when starting nodemon or should the log be accessible in some other way?
You can use docker logs -f cid. This will give you the standard output and standard error streams from the container cid.
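For example, find the container and then follow its output (the --tail flag just limits how much history is printed before following):
docker ps
docker logs -f --tail 100 <container id>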
Related
I have a docker service/image I'm using which restarts as soon as it starts.
I'm unable to fix the issue by getting into the container using
docker exec -it CONTAINER_NAME
since it restarts/terminates as soon as it boots.
Is there any way I can pause it directly? I can't rebuild the image as I don't have access to the internet on the server. (Yes, I'm sure a rebuild or build --no-cache would fix the issue.)
The issue should be easily fixable if I modify the permissions on a certain folder, but I'm not sure how to do this inside the container when I can't access it. The image doesn't have a Dockerfile and is used directly from Docker Hub.
If we do not get any information from the container's logs, we have the option to start the process "manually". For this, we start the container with an interactive terminal (-it: -i to keep STDIN open, -t to allocate a pseudo-TTY) and override the entrypoint to be a shell, e.g. bash. For good measure, we want the container to be removed when it terminates (i.e. when we exit the terminal): --rm.
docker run ... -it --rm --entrypoint /bin/bash
Once inside the container, we can start the process that would have normally started through the entrypoint from the container's terminal and extract error information from here.
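A rough sketch of that, with a hypothetical image name and folder path:
docker run -it --rm --entrypoint /bin/bash someorg/someimage:latest
# inside the container: fix the permissions on the offending folder
chmod -R u+rwX /path/to/that/folder
# then start whatever the image normally runs (docker inspect shows its Entrypoint/Cmd)
Keep in mind that changes made in a --rm container disappear when you exit; once the fix is confirmed, it can be persisted with docker commit or a small derived image (FROM the original image plus a RUN chmod ... line), neither of which needs internet access.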
I use the following command to start my web server:
docker run --name webapp -p 8080:4000 mypyweb
When it has stopped and I want to restart it, I always use:
sudo docker start webapp && sudo docker exec -it webapp bash
But I can't see the server state as I did the first time:
Digest: sha256:e61b45be29f72fb119ec9f10ca660c3c54c6748cb0e02a412119fae3c8364ecd
Status: Downloaded newer image for ericgoebelbecker/stackify-tutorial:1.00
* Running on http://0.0.0.0:4000/ (Press CTRL+C to quit)
How can I see the state instead of interacting with the shell?
When you use docker start without any flags, the container runs detached: it runs in the background and is not attached to your shell's stdin/stdout (your original docker run, by contrast, ran in the foreground).
To run a new container in the foreground and connected to stdin/stdout:
docker run --interactive --tty --publish=8080:4000 mypyweb
To docker start a container, similarly:
docker start --interactive --attach [CONTAINER]
NB --attach rather than --tty (docker start has no --tty option).
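Applied to the container from the question, that would be:
docker start --interactive --attach webapp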
You may list running containers (add --all to include stopped ones):
docker container ls
E.g. I ran Nginx:
CONTAINER ID IMAGE PORTS NAMES
7cc4b4e1cfd6 nginx 0.0.0.0:8888->80/tcp nostalgic_thompson
NB You may use the NAME or any uniquely identifiable subset of the ID to reference the container
Then:
docker stop nostalgic_thompson
docker start --interactive --attach 7cc4
You may check the container's logs (when running detached or from another shell) by grabbing the container's ID or NAMES
docker logs nostalgic_thompson
docker logs 7cc4
HTH!
Using docker exec starts a new shell inside the container; it doesn't attach you to the process the container is already running. If you are comparing the behavior of docker run versus docker start, they behave differently, and it is confusing. Try this:
$ sudo docker start -a webapp
the -a flag tells docker to attach stdout/stderr and forward signals.
There are some other switches you can use with the start command (and a huge number for the run command). You can run docker [command] --help to get a summary of the options.
One other command that you might want to use is logs which will show the console output logs for a running container:
$ docker ps
[find the container ID]
$ docker logs [container ID]
If you think your container's misbehaving, it's often not wrong to just delete it and create a new one.
docker rm webapp
docker run --name webapp -p 8080:4000 mypyweb
Containers occasionally have more involved startup sequences, and these can assume they're generally starting from a clean slate. It should also be extremely routine to delete and recreate a container; it's required for some basic tasks like upgrading the image underneath a container to a newer version or changing published ports or environment variables.
docker exec probably shouldn't be part of your core workflow, any more than you'd open a shell to interact with your Web browser. I generally don't tend to docker stop containers, except to immediately docker rm them.
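As a sketch of that routine (the build step is just a placeholder for however you obtain a newer image, e.g. docker pull):
docker build -t mypyweb .
docker rm -f webapp
docker run --name webapp -p 8080:4000 mypyweb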
I am running RabbitMQ in a docker container in detached mode. I am doing this so I can set some values using rabbitmqctl.
I added tail -f /dev/null so the container doesn't shut down.
However when I do this, I get no logging from the docker container.
How can I run rabbitmq-server -detached AND get logging to the "console"?
docker logs -f [container name or container ID]
will give you the container log. If RabbitMQ logs to a specific file, you can then do:
docker exec [container name or container ID] tail -f [path to the RabbitMQ log file]
To get the container ID or name in case that you don't know it use:
docker ps
One alternative is to set RABBITMQ_LOG_BASE to a shared volume directory.
In your Dockerfile, add:
ENV RABBITMQ_LOG_BASE="/var/log/foo"
Then, run the container with:
docker run -d -v /var/log/bar:/var/log/foo your_image
Then you can get the data directly in your host in the directory /var/log/bar.
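If you'd rather not build a new image at all, the same variable can in principle be set at run time with -e (whether your RabbitMQ version honours it this way is an assumption to verify):
docker run -d -e RABBITMQ_LOG_BASE="/var/log/foo" -v /var/log/bar:/var/log/foo your_image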
I have a node docker container on which i'm running a dev server.
In my docker-compose.yml file, the entry command is:
...
command: start-dev-server
...
Where start-dev-server points to a script that starts the server after a vendor install :
#!/usr/bin/env bash
# /usr/local/bin/start-dev-server
# install node modules if missing
npm i
# start the dev server
npm run start
So when I start my container, the server will also start.
I know that I can access my container in bash via the following command:
docker exec -it my-container bash
But there I can't stop or restart my server.
Is there a way to attach to the started command (to see the server logs, for example, or to stop & restart it)?
Maybe I'm going about this the wrong way because the entry command isn't supposed to be stopped? In that case, does anyone have a solution that would allow me to start my server & control it in a more flexible way?
Best practice says that you should treat the container as your server. If you want to stop it, stop the container (docker stop my-container); if you want to restart it, restart the container (docker restart my-container). Your server should log to stdout, so you can see the logs using docker logs -f my-container. So, you're right, the command isn't supposed to be stopped, as stopping it would stop the container.
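Since the service is defined in docker-compose.yml, the Compose equivalents may be handier; assuming the service is named, say, web in that file:
docker-compose logs -f web
docker-compose restart web
docker-compose stop web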
I am using docker logs -f mycontainer to check the logs. If I restart mycontainer by docker rm -f mycontainer then docker run -d --name mycontainer, I need to use Ctrl-C then rerun the docker logs command to get the logs. I wonder if there is a better way for me to keep receiving the logs even after the container restarts.
As other commenters have mentioned, the "rm" command is destroying your container, not restarting it.
But to answer your question you could use something like this:
watch -n 0 "docker logs mycontainer"
The docker logs command doesn't keep running for a stopped container, but you can achieve a similar effect using the "watch" command. And since it's not a Docker command, it doesn't care if the container is running or not.
If you're on a Mac you might not have watch. It can be installed with Homebrew (brew install watch). The same thing can be achieved with a one-liner bash script but I find watch to be much neater.
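That one-liner could be something along these lines; it simply re-attaches whenever docker logs exits (for example because the container was removed and recreated):
while true; do docker logs -f mycontainer 2>&1; sleep 1; done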
Two things:
With docker rm -f mycontainer you are not stopping your container, you are killing and removing it, then starting another, brand-new one afterwards.
You can use docker stop mycontainer and docker start mycontainer, or simply docker restart mycontainer, to keep the logs.
Because containers are disposable, you will lose the logs if you delete your container. In that case, you have to use a volume that your application writes its logs to; the logs will then be on the host instead of inside your container.
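A minimal sketch of that, assuming the application writes its log files to a hypothetical /app/logs directory inside the container:
docker run -d --name mycontainer -v /var/log/myapp:/app/logs myimage
The files under /var/log/myapp on the host will then survive a docker rm of the container.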