I am running RabbitMQ in a Docker container in detached mode. I am doing this so I can set some values using rabbitmqctl.
I added tail -f /dev/null so the container doesn't shut down.
However, when I do this, I get no logging from the Docker container.
How can I run rabbitmq-server -detached AND get logging to the "console"?
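Roughly, the container startup looks like this (a sketch of the setup described above; the rabbitmqctl call and pid-file path are placeholders, not my actual values):
rabbitmq-server -detached
rabbitmqctl wait /var/lib/rabbitmq/mnesia/rabbit@$HOSTNAME.pid   # pid-file path is an assumption
rabbitmqctl set_disk_free_limit 2GB                              # stand-in for the values being set
tail -f /dev/null                                                # keeps the container alive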
docker logs -f [container name or container ID]
will give you the container log. If RabbitMQ logs to a specific file, you can then do:
docker exec [container name or container ID] tail -f [path to the RabbitMQ log file]
To get the container ID or name, in case you don't know it, use:
docker ps
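Putting those together, with a hypothetical container named rabbit and RabbitMQ's usual default log directory (both are assumptions; adjust to your setup):
docker ps                                        # find the container name or ID
docker logs -f rabbit                            # whatever the container writes to stdout/stderr
docker exec rabbit ls /var/log/rabbitmq          # locate the log file
docker exec rabbit tail -f /var/log/rabbitmq/rabbit@<node>.log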
One alternative is to set RABBITMQ_LOG_BASE to a shared volume directory.
In your dockerfile, add:
ENV RABBITMQ_LOG_BASE="/var/log/foo"
Then, run the container with:
docker run -d -v /var/log/bar:/var/log/foo your_image
Then you can read the logs directly on your host in the directory /var/log/bar.
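With that mapping in place you can tail the log straight from the host; the exact file name depends on the RabbitMQ node name, so the one below is only an assumption:
tail -f /var/log/bar/rabbit@<container-hostname>.log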
I use the following command to start a web server:
docker run --name webapp -p 8080:4000 mypyweb
When it has stopped and I want to restart it, I always use:
sudo docker start webapp && sudo docker exec -it webapp bash
But I can't see the server output as I did the first time:
Digest: sha256:e61b45be29f72fb119ec9f10ca660c3c54c6748cb0e02a412119fae3c8364ecd
Status: Downloaded newer image for ericgoebelbecker/stackify-tutorial:1.00
* Running on http://0.0.0.0:4000/ (Press CTRL+C to quit)
How can I see the server output instead of interacting with the shell?
When you use docker start, the default behavior is to run the container detached: it runs in the background, disconnected from your shell's stdin/stdout. (docker run without -d, by contrast, stays attached, which is why you saw the output the first time.)
To run the container in the foreground and connected to stdin/stdout:
docker run --interactive --tty --publish=8080:4000 mypyweb
To docker start a container, similarly:
docker start --interactive --attach [CONTAINER]
NB: --attach rather than --tty
You may list running containers (add --all to include stopped ones):
docker container ls
E.g. I ran Nginx:
CONTAINER ID IMAGE PORTS NAMES
7cc4b4e1cfd6 nginx 0.0.0.0:8888->80/tcp nostalgic_thompson
NB: You may use the NAME or any uniquely identifying prefix of the ID to reference the container
Then:
docker stop nostalgic_thompson
docker start --interactive --attach 7cc4
You may check the container's logs (when running detached or from another shell) using the container's ID or NAME:
docker logs nostalgic_thompson
docker logs 7cc4
HTH!
Using docker exec attaches a shell to the container; it doesn't show the server's output. docker run and docker start behave differently by default, which is confusing. Try this:
$ sudo docker start -a webapp
The -a flag tells Docker to attach stdout/stderr and forward signals.
There are some other switches you can use with the start command (and a huge number for the run command). You can run docker [command] --help to get a summary of the options.
One other command that you might want to use is logs, which will show the console output for a running container:
$ docker ps
[find the container ID]
$ docker logs [container ID]
If you think your container's misbehaving, it's often not wrong to just delete it and create a new one.
docker rm webapp
docker run --name webapp -p 8080:4000 mypyweb
Containers occasionally have more involved startup sequences, and these can assume they're starting from a clean slate. Deleting and recreating a container should also be extremely routine; it's required for basic tasks like upgrading the image underneath a container to a newer version or changing its published ports or environment variables.
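For instance, upgrading the image behind the webapp container might look like this (assuming the image is rebuilt locally; use docker pull instead if it comes from a registry):
docker build -t mypyweb .      # produce the newer image
docker rm -f webapp            # remove the old container
docker run --name webapp -p 8080:4000 mypyweb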
docker exec probably shouldn't be part of your core workflow, any more than you'd open a shell to interact with your Web browser. I generally don't tend to docker stop containers, except to immediately docker rm them.
I want to run an image which I have already created and uploaded to Docker Hub. Is it possible to run that image on lxc/lxd? Basically, I want to do a performance comparison between Docker and LXC.
I have installed skopeo, umoci, go-md2man and jq.
Now, when I try to run the command lxc-create c1 -t oci -- --url docker://awaisaz/test:part2
it gives a trust policy error: /etc/containers/policy.json: no such file or directory
Can anyone suggest a solution or an alternative way to do this?
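For what it's worth, that error usually just means skopeo cannot find any trust policy file at all; a minimal, permissive policy along these lines is commonly used for local experiments (a sketch only, not a security recommendation):
sudo mkdir -p /etc/containers
sudo tee /etc/containers/policy.json >/dev/null <<'EOF'
{
    "default": [
        { "type": "insecureAcceptAnything" }
    ]
}
EOF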
So, you want to run a Docker container inside an LXC container.
First, you need to get the Docker daemon up and running inside the LXC container.
sudo lxc config edit <lxc-container-name>
In the config: section of that YAML, add (LXD expects the values as quoted strings):
linux.kernel_modules: overlay,ip_tables
security.nesting: "true"
security.privileged: "true"
Then save and exit the YAML file, and restart the LXC container:
sudo lxc restart <container_name>
After the LXC container has restarted successfully, exec into it with:
sudo lxc exec <container_name> /bin/bash
Then,
sudo rm /var/lib/docker/network/files/local-kv.db
and restart the Docker service (inside the LXC container):
service docker restart
Then you can use Docker inside the LXC container as if you were in a VM.
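To verify the setup from the host, something like this should work (hello-world is used here only as an example image):
sudo lxc exec <container_name> -- docker run --rm hello-world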
docker ps is giving me a different output compared to docker-compose ps.
For example
docker ps
is not showing the same containers as
docker-compose ps
and vice-versa.
What is the reason for this?
I thought docker-compose works on top of docker.
docker ps lists all running containers in the Docker engine. docker-compose ps lists the containers for the services declared in the docker-compose file.
The result of docker-compose ps is a subset of the result of docker ps.
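You can see the same relationship from plain docker, because Compose labels the containers it creates; for example, with a project named prod (the name is just an assumption) the subset can be reproduced with a label filter:
docker ps --filter "label=com.docker.compose.project=prod"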
docker ps - lists all running containers in Docker engine.
docker-compose ps - lists containers for the given docker-compose configuration. The result depends on the configuration and parameters passed to the docker-compose command.
Example
Start the containers with the following command:
docker-compose -p prod up -d
(-p in the command above defines the project name)
Running docker-compose ps won't list the containers, since the project name parameter is not passed:
docker-compose ps
Name Command State Ports
------------------------------
Running docker-compose -p prod ps will list the project's containers:
Name Command State Ports
--------------------------------------------------------------------------------------------------------
prod_app_1 sh -c exec java ${JAVA_OPT ... Up 0.0.0.0:5005->5005/tcp, 0.0.0.0:9001->9000/tcp
prod_database_1 docker-entrypoint.sh postgres Up 0.0.0.0:5432->5432/tcp
prod_nginx_1 /docker-entrypoint.sh ngin ... Up 0.0.0.0:8443->443/tcp, 0.0.0.0:8080->80/tcp
prod_pgadmin_1 /entrypoint.sh Up 443/tcp, 0.0.0.0:9100->80/tcp
The same applies if you specify, for example, docker-compose files with the -f parameter.
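A sketch of that case (the file name docker-compose.prod.yml is only an assumption): the same -f must be passed to ps, otherwise the containers won't show up:
docker-compose -f docker-compose.prod.yml up -d
docker-compose -f docker-compose.prod.yml ps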
I've got a node docker container that starts my app with nodemon.
What I would like to do, is access that container and somehow view nodemon console log.
I can access the container shell with docker exec -ti <container id> bash, and ps aux tells me that nodemon is running my app, but I couldn't find any documentation about accessing the output of nodemon while it's running.
Should I forward output to a file when starting nodemon or should the log be accessible in some other way?
You can use docker logs -f cid. This will give you the standard output and standard error streams from the container cid.
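For example, assuming a container ID beginning with abc123 (a placeholder), this follows the log and limits the initial backlog:
docker logs -f --tail 100 abc123
nodemon's output shows up there as long as it is the container's main process and writes to stdout/stderr.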
Playing with ELK and Docker, I needed to restart all the services.
docker ps told me that I don't have any containers up.
docker run -it --rm [...] --name es elasticsearch -> Error response from daemon. The name "es" is already in use by container [...]
So I tried to remove all containers:
docker ps -a -q | xargs docker rm -> Cannot connect to the Docker daemon. Is the docker daemon running on this host?
The container is not up but is still there.
Of course I could simply change my container's name, but that's not right: it means a container still exists, even after I restart my server.
Any idea?
When you stop a container it is not removed by default, unless you started it with the --rm flag. So most likely you started and stopped a container named es before, and it is now stopped. It is not possible to create a new container with an existing name, even if the existing one is not running. Use the -a flag to show all the containers you have:
docker ps -a
If you have one with the name es, just remove it manually with:
docker rm es
You can also pass the -f flag to force removal of the es container even if it is running.
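To double-check which container is holding the name, a name filter also works (es being the name from the question):
docker ps -a --filter "name=es"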
docker rm es should do the trick. Furthermore, if you want to remove a running container, you can add the -f parameter (docker rm -f container_name).