I want to collect Docker container logs. By default, the log files are deleted when the container is removed, so I lose some logs every time I update my service. How can I keep the log files after removing containers?
Or is there another way to collect all logs from containers without losing any?
There are two situations:
If your logs go to stdout or stderr, you can save them before removing the container:
docker logs CONTAINER_ID > container.log
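If the losses happen when you update the service, one rough sketch (OLD_CONTAINER and the logs/ directory are placeholders) is to capture both output streams just before the old container is removed:
# save stdout and stderr of the old container, then remove it
docker logs OLD_CONTAINER > logs/OLD_CONTAINER-$(date +%F).log 2>&1
docker rm OLD_CONTAINER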
If your logs are stored in files inside the container, you can copy them out, or mount a host directory for them when running the container:
# Copy the logs out to the host
docker cp CONTAINER_ID:/path/to/your/log_file /host/path/to/store
# Mount a directory for them
docker run -d \
-v /host/path/to/store/logs:/container/path/stored/logs \
your-image
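If you would rather not manage the files yourself, Docker's logging drivers can ship the output somewhere that outlives the container. A minimal sketch, assuming a syslog daemon is listening at the address shown (the address is a placeholder):
# send stdout/stderr to the host's syslog instead of the default json-file driver,
# so the log data is not deleted together with the container
docker run -d \
  --log-driver=syslog \
  --log-opt syslog-address=udp://127.0.0.1:514 \
  your-image
Note that docker logs only works with the json-file, local and journald drivers, so with syslog you read the logs on the syslog side instead.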
Related
I need to look into the Docker logs from some days ago, and checking with docker service logs SERVICE | grep WHAT_I_NEED takes forever, so I want to download the container logs from Docker Swarm and check them locally. I found that the log file path of a container in Swarm can be found with:
docker inspect --format='{{.LogPath}}' $INSTANCE_ID
but I can't find a way to download the log from the location.
Running docker cp CONTAINER_ID:/var/lib/docker/containers/ABC/ABC-json.log ./ tells me that the path is not present. I understand that this path is on the Swarm node, but then how do I get the log from the container itself? Or is there another way to copy this file directly to a local file?
Try running this from your terminal:
docker logs your_container_name > file.log 2>&1
This redirects the container logs (both stdout and stderr) to the local file file.log.
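If you specifically need the json log file that .LogPath points to, keep in mind that the path lives on the node that runs the task, not inside the container, which is why docker cp reports it as missing. A small sketch, run on that node (sudo because /var/lib/docker is root-owned):
# resolve the log path on the node and copy it somewhere readable
LOGPATH=$(docker inspect --format='{{.LogPath}}' $INSTANCE_ID)
sudo cp "$LOGPATH" ./container-json.log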
After creating a Docker container as follows:
PS> docker run -d -p 1433:1433 --name sql1 -v sql1data:C:/sqldata -e sa_password=MyPass123 -e ACCEPT_EULA=Y microsoft/mssql-server-windows-developer
I stopped my container and copied a backup file into my volume:
PS> docker cp .\DataBase.bak sql1:C:\data
After that I can no longer start my container, the error message is as follows:
Error response from daemon: container 5fe22f4ac151d7fc42541b9ad2142206c67b43579ec6814209287dbd786287dc encountered an error during Start: failure in a Windows system call: The compute system exited unexpectedly. (0xc0370106)
Error: failed to start containers: sql1
I can start and stop any other container, the problem occurs only after copying the file into the volume.
I'm using Windows containers.
My Docker version is 18.06.0-ce-win72 (19098).
The only workaround I found is to not copy any files into my container volume.
It seems like it's because of file ownership and permissions. When you make a backup by copying the files and use those files for a new Docker container, the MySQL daemon in your Docker container finds that the ownership and permissions of its files have changed.
I think the best thing to do is to create a plain MySQL Docker container and see who owns your backup files in that container (I guess it must be 1000), then change the owner of your backup files to that user id, and then create a container with volumes mapped to your backup files.
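A rough sketch of that idea on a Linux host, assuming a stock MySQL image and a host directory ./backup holding the copied data files; the uid 1000 is only the guess from above, so use whatever the first command actually prints:
# 1) see which uid/gid owns the data directory inside the image
docker run --rm --entrypoint ls mysql:8 -lnd /var/lib/mysql
# 2) give your backup files the same uid/gid on the host
sudo chown -R 1000:1000 ./backup
# 3) start a container with the backup mounted as its data directory
docker run -d -v "$(pwd)/backup":/var/lib/mysql mysql:8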
I have the following Dockerfile:
FROM jboss/wildfly
USER jboss
RUN mkdir -p /opt/jboss/wildfly/standalone/log
VOLUME /opt/jboss/wildfly/standalone/log
CMD /bin/bash
# CMD true
The resulting image is started with docker run -ti --name=data_volume data/volume. The next Dockerfile
FROM jboss/wildfly
RUN sed -i 's|<file relative-to="jboss.server.log.dir" path="server.log"/>|\<file relative-to="jboss.server.log.dir" path="\${jboss.host.name}-server.log"/\>|' /opt/jboss/wildfly/standalone/configuration/standalone.xml
overrides the logging configuration of the resulting JBoss so that it logs to "servername"-server.log in the logging dir. When I start the resulting image with docker run -ti --name=wild-01 --volumes-from=data_volume my/wildfly and docker run -ti --name=wild-02 --volumes-from=data_volume my/wildfly, I have two log files in my data_volume container. So far, so good.
I would like to point my volume to a directory on the host, e.g. /var/log/wildfly.
How can I achieve this in the Dockerfiles and not with the -v parameter when running data/volume?
Thanks a lot in advance.
Inside a Dockerfile you can only define volumes that end up under /var/lib/docker/volumes. This is because every host can be different from the others.
Docker uses /var/lib/docker as its "docker area", where it stores all Docker-related data. It's the one directory that's guaranteed on every host, because it gets created on installation.
If you were to point a volume in the Dockerfile to, say, /home/mbieren/docker_vol, the image would produce multiple errors when executed on a different host, as that directory does not exist and the user probably has insufficient permissions to create it.
Docker gets around that problem by not allowing custom mount paths to be set in the Dockerfile.
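To see where such a Dockerfile-declared volume actually lands, you can inspect the data_volume container from the question; this assumes a reasonably recent Docker, which reports volumes under .Mounts:
# prints the host-side location of the anonymous volume,
# typically /var/lib/docker/volumes/<hash>/_data
docker inspect -f '{{ range .Mounts }}{{ .Source }}{{ end }}' data_volume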
I would like to point my volume to a directory on the host, e.g. /var/log/wildfly.
Remove all mention of volumes from your Dockerfile, then launch your container using:
docker run -d -v /var/log/wildfly:/var/log/wildfly your-image-name
then in your code just reference the normal path
/var/log/wildfly
Your syntax to launch the container, docker run -ti, makes the container shell interactive, whereas -d is the normal mode to spin it up as a daemon running in the background.
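Adapted to the WildFly images from the question, that would look roughly like this, assuming the logs should keep going to the standalone/log directory declared in the first Dockerfile:
docker run -d --name=wild-01 -v /var/log/wildfly:/opt/jboss/wildfly/standalone/log my/wildfly
docker run -d --name=wild-02 -v /var/log/wildfly:/opt/jboss/wildfly/standalone/log my/wildfly
Both containers then write their ${jboss.host.name}-server.log files straight into /var/log/wildfly on the host, and the data_volume container is no longer needed.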
I have a dockerized web application that I'm running in an HA setup. I have a cron job that runs dockup every midnight to back up my important information stored in other containers. Now I would like to back up and aggregate the logs from my web application too. The problem is, how do I do that? If I use the VOLUME key in the Dockerfile to expose /logs to the host machine, wouldn't there be a collision because there would be two /logs directories on the dockup container?
I have checked dockup. It does not have a /logs directory. Seems it uses /var/logs for log output.
$ docker run -it --name dockup borja/dockup bash
Otherwise, yes, it would be a problem, because the volume would be mounted under the mentioned name and the container's own processes would also log to that folder. Not good.
1. Use a logging container like fluentd (a minimal sketch follows this list). It can also write to S3 buckets, like dockup does; a tutorial can be found here.
2. Tweak your container, e.g. with symbolic links, to log or relay the logs to a different volume.
3. Access the logs not through the containers but through native Docker, and copy them to S3 yourself, or run dockup on your locally mounted log file.
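A minimal sketch of option 1, using Docker's fluentd logging driver; the address and tag are placeholders, and a fluentd instance must already be listening there:
$ docker run -d \
    --log-driver=fluentd \
    --log-opt fluentd-address=localhost:24224 \
    --log-opt tag=webapp \
    your-web-image
fluentd can then forward the collected records to S3, so the web application's logs end up next to the dockup backups. If you prefer to stay file-based instead, the commands below redirect the logs to a file and hand that file to dockup: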
$ docker logs container/name > logfile.log
$ docker run --rm \
--env-file env.txt \
-v $(pwd)/logfile.log:/customlogs/logfile.txt \
--name dockup borja/dockup
Now you can use the folder /customlogs/ as your backup path inside env.txt.
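For completeness, a sketch of what env.txt might contain. The variable names below are recalled from the dockup README and may differ in your version, so treat them as assumptions and check them against the image you run; all values are placeholders:
# env.txt (illustrative only; variable names may differ in your dockup version)
AWS_ACCESS_KEY_ID=your-key-id
AWS_SECRET_ACCESS_KEY=your-secret
AWS_DEFAULT_REGION=us-east-1
BACKUP_NAME=webapp-logs
PATHS_TO_BACKUP=/customlogs
S3_BUCKET_NAME=your-bucket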
I use the method below to export data from one container.
docker run --volumes-from <data container> ubuntu tar -cO <volume path> | gzip -c > volume.tgz
But there is one problem with it: every time it performs a backup, a stopped ("zombie") container is left behind. What is a good way to get its id and remove the zombie container afterwards?
Thanks
Apparently, you just want to be able to export volume data. To do that, you just need to start your initial container with a volume pointing to a directory on the host with the -v option. You can tar on the host without creating a container for it. Your current tactic seems a bit over-engineered ;)
The easy way to remove the container after executing the command is to use the --rm option, from here.
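Applied to the command from the question, that is simply (same placeholders as before):
docker run --rm --volumes-from <data container> ubuntu tar -cO <volume path> | gzip -c > volume.tgz
The helper container is then removed automatically as soon as tar exits.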
However, if you feel that the container you are creating will have data that you will need to
1. update in real time
2. access after the container has been created
then you may also mount a host directory as a container volume and access the contents of that directory from the host.
If you start a container using the -v option, you can also look up the volume directory that Docker created on the host:
$ docker run -v /volume_directory ubuntu
$ container=$(docker ps -n=1 -q)
$ docker inspect -f '{{ .Mounts }}' $container
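If stopped helper containers have already piled up, one way to clean them out in bulk is shown below; note that this removes every exited container on the host, not just the backup helpers:
$ docker rm $(docker ps -aq -f status=exited)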