docker logs multiple containers

I have a requirement where data gets loaded into the system through load-balanced Docker containers. We have multiple Docker containers running on different instances, any of which may pick up the job and load the data. The loading process shows up in docker logs -f, but right now I have to check every container's logs manually to figure out which container is active and loading data, and then tail that one.
Is there a way to check which container is active and tail its logs? Or maybe merge all the container logs into a single stream and tail that?
I have a good understanding of shell, so any pointers would be helpful.
Please let me know if further details are needed.
Thanks

Assuming you use Docker Compose, docker-compose logs -f will show the merged logs of every container in the project. If you are running this as a Swarm service, docker service logs does the same across all tasks of the service.
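A minimal sketch of both (the service name "loader" is hypothetical):
# Follow the merged logs of every container in a Compose project
docker-compose logs -f --tail=100
# Follow the logs of every replica of a Swarm service
docker service logs -f loader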

Related

Make Docker Logs Persistent

I have been using docker-compose to set up some docker containers.
I am aware that the logs can be viewed using docker logs <container-name>.
All logs are printed to STDOUT and STDERR when the containers run; there is no log 'file' being generated inside the containers.
But these logs (obtained from the docker logs command) are removed when their respective containers are removed by commands like docker-compose down or docker-compose rm.
When the containers are created and started again, there is a fresh set of logs; no logs from the previous 'run' are present.
I am curious if there is a way to somehow prevent the logs from being removed along with their containers.
Ideally I would like to keep all my previous logs even when the container is removed.
I believe you have two ways you can go:
Make containers log into files
You can reconfigure the applications inside the container to write into log files rather than stdout/stderr. As you put it, you'd like to keep the logs even when the container is removed, so make sure the files are stored in a (bind-)mounted volume.
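For example, a minimal sketch (the paths and image name are hypothetical):
# Keep application log files on the host so they survive container removal
docker run -d --name myapp -v /var/log/myapp:/app/logs myapp-image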
Reconfigure docker to store logs
Reconfigure Docker to use a different logging driver. This can be especially helpful as it saves you from changing each and every container.
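A sketch of the per-container variant, assuming a systemd host (the container name is hypothetical):
# Send logs to the host's systemd journal instead of the default json-file driver
docker run -d --log-driver=journald --name myapp myapp-image
# The logs stay readable even after the container is removed
journalctl CONTAINER_NAME=myapp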

Docker container restarting in a loop on GCE

I have correctly deployed a Docker container which runs a Python script that grabs some data from the internet and slaps it in BigQuery. The container works well on my machine and on a GCE instance that I've provisioned.
Now, everything works well for the most part, but I am failing to understand why the Docker container always restarts after exiting (apparently correctly). Logs, in this case, seem fairly useless as there is no error whatsoever. My current hunch is that something is failing silently, forcing the instance to restart.
Is there any way to find out the reboot reason for a given Docker container?
Things tried so far
I've tried to print the exit code of the container in the following way. The result is always 0, no matter how many restart cycles occur.
while true
do
    docker inspect my_container --format='{{.State.ExitCode}}'
    sleep 1
done
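A slightly richer probe may help here; this sketch uses standard docker inspect fields and docker events to show the restart count and when the container last exited:
# Restart count, last exit code, and last finish time in one line
docker inspect my_container --format 'restarts={{.RestartCount}} exit={{.State.ExitCode}} finished={{.State.FinishedAt}}'
# Watch lifecycle events for this container as they happen
docker events --filter container=my_container --filter event=die --filter event=start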
The Google Cloud documentation describes different ways in which you can review your container-related logs, including container starts and stops.
In any case, I think there is no problem with your container: by default, Compute Engine will restart a container on exit, although you can specify a different restart policy if you need to. Please see the relevant documentation.
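For instance, a sketch of changing the policy (verify the gcloud flag against your gcloud version; the VM and image names are hypothetical):
# Deploy the container so it is not restarted after a clean exit
gcloud compute instances create-with-container my-vm \
    --container-image=gcr.io/my-project/my-image \
    --container-restart-policy=on-failure
# Or, if you manage the container directly, change the policy in place
docker update --restart=no my_container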

How to get docker system ID inside container?

As far as I understand, each docker installation has some kind of unique ID. I can see it by executing docker system info:
$ docker system info
// ... a lot of output
ID: UJ6H:T6KC:YRIL:SIDL:5DUW:M66Y:L65K:FI2F:MUE4:75WX:BS3N:ASVK
// ... a lot of output
The question is whether it's possible to get this ID from inside the container (by executing code inside the container) without mapping any volumes, etc.?
Edit:
Just to clarify the use case (based on the comments): we're sending telemetry data from Docker containers to our backend. We need to identify which containers share the same host. This ID would help us achieve that (it's a kind of machine ID). If there's any other way to identify the host, that would solve the issue as well.
No. Unless you explicitly inject that information into the container (volumes, COPY, an environment variable, an ARG passed at build time and persisted in a file, etc.), or fetch it via a GET request, for example, that information is not available inside docker containers.
You may open a console inside a container and search all files that contain that ID with grep -rnw '/' -e 'the-ID', but nothing will match the search.
On the other hand, any breakout from the container to the host would be a real security concern.
Edit, to answer the update on your question:
The docker host has visibility into the containers that are running. A much better approach would be to send the information you need from the host level rather than the container level.
You could still send data directly from the containers and use the container ID, which is known inside the container, and correlate this telemetry with the data sent from the docker host.
Yet another option, which is even better in my opinion, is to write that telemetry data to the container's stdout. This info can easily be collected on the docker host via the logging driver and sent on to the telemetry backend.
Often the hostname of the container is the short container ID: not the ID you're talking about, but the ID you would use for, e.g., docker container exec, so it's a fine identifier.
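A quick sketch of that approach (all standard commands):
# Inside the container: the short container ID is usually the hostname
hostname
cat /etc/hostname
# On the host: correlate short IDs with full IDs and names
docker ps --no-trunc --format '{{.ID}} {{.Names}}'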

docker swarm services stuck in preparing

I have a swarm stack deployed. I removed a couple of services from the stack and tried to deploy them again. These services now show a desired state of 'remove' and a current state of 'preparing', and their names have changed from the custom service names to random Docker names. Swarm keeps trying to start these services, which are also stuck in preparing. I ran docker system prune on all nodes and then removed the stack. All the services in the stack are gone now except for the random ones. I can't delete them and they are still in the preparing state. The services are not running anywhere in the swarm, but I want to know if there is a way to remove them.
I had the same problem. Later I found that the current state 'Preparing' indicates that Docker is trying to pull images from Docker Hub, but there is no clear indicator of this in docker service logs <serviceName> (at least with compose file versions above '3.1').
The pull can take a long time due to network bandwidth or other Docker internals.
Hope it helps! I will update the answer if I find more relevant information.
P.S. I verified that docker stack deploy -c <your-compose-file> <appGroupName> was not actually stuck by switching the command to docker-compose up; for me, it simply took 20+ minutes to download my image.
So this is not an open issue with docker stack deploy.
Adding a reference from Christian to complete this answer.
Use docker-machine ssh to connect to a particular machine:
docker-machine ssh <nameOfNode/Machine>
Your prompt will change. You are now inside another machine. Inside this other machine do this:
tail -f /var/log/docker.log
You'll see the "daemon" log for that machine. There you'll see whether that particular daemon is doing the "pull", or what it is doing as part of the service preparation. In my case, I found something like this:
time="2016-09-05T19:04:07.881790998Z" level=debug msg="pull progress map[progress:[===========================================> ] 112.4 MB/130.2 MB status:Downloading
Which made me realise that it was just downloading some images from my docker account.
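Two related commands that may save you the ssh round-trip (both are standard, though the daemon log location varies by distribution):
# From a manager node: task states and any error messages, untruncated
docker service ps --no-trunc <serviceName>
# On systemd-based hosts the daemon log lives in the journal, not /var/log/docker.log
journalctl -u docker.service -f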

Docker backup container with startup parameters

I've been facing the same problem for months now and I don't have an adequate solution.
I'm running several containers based on different images. Some of them were started using Portainer with some arguments and volumes. Some of them were started using the CLI and docker start with some arguments and parameters.
Now all these settings are stored somewhere, because if I stop and restart such a container, everything works well again. But if I do a commit, back it up with tar, and load it on a different system and do a docker start, it has lost all of its settings.
The procedure as described here: https://linuxconfig.org/docker-container-backup-and-recovery does not work in my case.
Now I'm thinking about writing my own web application which would create docker compose files based on my settings, rather than just doing a docker start with the correct params. This web application should also take care of the volumes (just folders) and do an incremental backup of them with borg to a remote server.
But actually this is only an idea. Is there a way to "extract" a docker compose file from a running container, so that I can redeploy a container 1:1 to another server and just have to run docker run mycontainer and it will have the same settings?
Or do I have to write my web app? Or have I missed some page on Google and there is already such a solution?
Thank you!
To see the current configuration of a container, you can use:
docker container inspect $container_id
You can then use those configurations to run your container on another machine. There is no easy import/export of these settings to start another container that I'm aware of.
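As a starting point, you can pull individual settings out of the inspect output (a sketch; my_container is a placeholder):
# Environment variables, mounts, and restart policy as JSON
docker container inspect my_container --format '{{json .Config.Env}}'
docker container inspect my_container --format '{{json .Mounts}}'
docker container inspect my_container --format '{{json .HostConfig.RestartPolicy}}'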
Most people use a docker-compose.yml to define how they want a container run. They also build images with a Dockerfile and transfer them with a registry server rather than a save/load.
The docker-compose.yml can be used with docker-compose or docker stack deploy and allows the configuration of the container to be documented as a configuration file that is tracked in version control, rather than error-prone, hand-entered settings. Running containers by hand or starting them with a GUI is useful for a quick test or debugging, but not for reproducibility.
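A sketch of that workflow (the registry address and tag are hypothetical):
# Build and push from version-controlled files, then reproduce anywhere
docker build -t registry.example.com/myapp:1.0 .
docker push registry.example.com/myapp:1.0
docker-compose up -d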
You would like to back up the instance, but the commands you're providing back up the image. I'd suggest updating your Dockerfile to solve the issue. In case you really want to go down the road of saving the instance's current state, you should use the docker export and docker import commands.
Reference:
https://docs.docker.com/engine/reference/commandline/import/
https://docs.docker.com/engine/reference/commandline/export/
NOTE: docker export does not export the content of the volumes; for those, I suggest you refer to https://docs.docker.com/engine/admin/volumes/volumes/
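For the volume contents themselves, the commonly documented pattern is a throwaway container that archives the volume (the volume name is hypothetical):
# Archive the contents of a named volume into the current directory
docker run --rm -v myapp_data:/data -v "$PWD":/backup alpine tar czf /backup/myapp_data.tgz -C /data .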
