I am trying to test saving log files of docker containers on a playground site that gives you a Linux root shell with docker installed. I've used the solution provided here:
docker run -ti -v /dev/log:/root/data --name zizimongodb mongo
This is what I got in the console:
docker: Error response from daemon: oci runtime error: container_linux.go:262: starting container process caused "process_linux.go:339: container init caused \"rootfs_linux.go:57: mounting \\\"/dev/log\\\" to rootfs \\\"/graph/overlay2/7f1eb83902e3688c0a1204c2fe8dfd8fbf43e1093bc578e4c41028e8b03e4b38/merged\\\" at \\\"/graph/overlay2/7f1eb83902e3688c0a1204c2fe8dfd8fbf43e1093bc578e4c41028e8b03e4b38/merged/root/data\\\" caused \\\"permission denied\\\"\"".
But the container has started:
$ docker ps -a
CONTAINER ID   IMAGE   COMMAND                  CREATED         STATUS    PORTS   NAMES
8adaa75ba6f7   mongo   "docker-entrypoint..."   2 minutes ago   Created           zizimongodb
docker logs -f zizimongodb returns nothing. When I stop the container, nothing has been saved in /root/data. Any idea how I can correctly save all the logs?
Since you are using the official mongo image from Docker Hub, it is worth pointing out that this image (like many, if not all, of the official images) does not send log output to the default log locations you might expect from a Linux distro package of the same software.
Instead, most software that can be told where to log is configured to log to stdout/stderr, so that docker log drivers and the docker logs command itself work properly.
For the mongodb case you can see the somewhat complicated code here that tells the mongodb process to use the /proc filesystem file descriptor that maps to "stdout", as long as it is writable when the container is started. Because of some bugs this is more complicated than other Dockerfiles' customization of log output (you can read more, if interested, at the links in the comments).
I think a more reasonable way to try and do some form of log consolidation or collection is to read about docker log drivers and see if any of those options works for you. For example, if you like journald there is a driver which will take all container logs and pass them to journald on the host.
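That said, if the goal is specifically to end up with a log file under /root/data on the host, a minimal sketch of that approach (assuming the official image passes extra arguments through to mongod; the paths and the chmod are illustrative only) is to bind-mount a host directory, not /dev/log (which is a socket), and point mongod's --logpath at it:
mkdir -p /root/data && chmod 777 /root/data   # mongod runs as a non-root user inside the container
docker run -d --name zizimongodb -v /root/data:/var/log/mongodb mongo --logpath /var/log/mongodb/mongod.log --logappend
The trade-off is that once mongod writes to --logpath, nothing goes to stdout anymore, so docker logs will show nothing.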
I use Dokku to run my app, and for some reason the container dies every few hours and is recreated.
In order to investigate the issue, I want to read the error logs of this container and understand why it's crashing. Since Docker clears the logs of dead containers, this is impossible.
I ran docker events and it shows many events (like container update, container kill, container die, etc.), but no sign of what triggered the kill.
How can I investigate the issue?
Versions:
Docker version 19.03.13, build 4484c46d9d
dokku version 0.25.1
Logs are deleted when the container is deleted. If you want the logs to persist, then you need to avoid deleting the container. Make sure you aren't running the container with an option like --rm that automatically deletes it on exit. And check for the obvious issues like running out of disk space.
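If the dead container hasn't been deleted yet, a quick first check (a sketch; the container name is a placeholder) is to ask Docker why it exited:
docker inspect --format 'exit={{.State.ExitCode}} oom={{.State.OOMKilled}}' <container>
An exit code of 137 together with oom=true means the kernel's OOM killer stopped the container, which would explain the container kill/die events you saw.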
There are several things you can do to investigate the issue:
You can run the container in the foreground and allow it to log to your console.
If you were previously starting the container in the background with docker run -d (or docker-compose up -d), just remove the -d from the command line and allow the container to log to your terminal. When it crashes, you'll be able to see the most recent logs and scroll back to the limits of your terminal's history buffer.
You can even capture this output to a file using e.g. the script tool:
script -c 'docker run ...'
This will dump all the output to a file named typescript, although you can of course provide a different output name on the command line.
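For example, assuming your image is called my-image (a placeholder), this writes the container's output to my-container.log instead:
script -c 'docker run --rm my-image' my-container.log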
You can change the log driver.
You can configure your container to use a different logging driver. If you select something like syslog or journald, your container logs will be sent to the corresponding service, and will continue to be available even after the container has been deleted.
I like to use the journald logging driver because it allows searching for output by container id. For example, if I start a container like this:
docker run --log-driver journald --name web -p 8080:8080 -d docker.io/alpinelinux/darkhttpd
I can see logs from that container by running:
$ journalctl CONTAINER_NAME=web
Feb 25 20:50:04 docker 0bff1aec9b65[660]: darkhttpd/1.13, copyright (c) 2003-2021 Emil Mikulic.
These logs will persist even after the container exits.
(You can also search by container id instead of name by using CONTAINER_ID_FULL (the full id) or CONTAINER_ID (the short id), or even by image name with IMAGE_NAME.)
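For example, to pull the logs of every container started from the image used above:
$ journalctl IMAGE_NAME=docker.io/alpinelinux/darkhttpd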
I have a linux vm on which I installed docker. I have several docker containers with the different programs I have to use. Here's my architecture:
Everything is working fine except for the red box.
What I am trying to do is to dynamically provide a jenkins docker-in-docker agent with the cloud functionality in order to build my docker images and push them to the docker registry I set up.
I have been looking for documentation to create a docker in docker container and I found this:
https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
This article states that in order to avoid problems with my main docker installation I have to create a volume:
-v /var/run/docker.sock:/var/run/docker.sock
I tested my image locally and I have no problem running:
docker run -d --name test -v /var/run/docker.sock:/var/run/docker.sock <my-image>
docker exec -it test /bin/bash
docker run hello-world
The container is using the linux vm docker installation to build and run the docker images so everything is fine.
However, I face problems when it comes to the jenkins docker cloud configuration.
From what I gather, since the #826 build the docker Jenkins plugin has changed its syntax for volumes.
This is the configuration I tried:
And the error message I have when trying to launch the agent:
Reason: Template provisioning failed.
com.github.dockerjava.api.exception.BadRequestException: {"message":"create /var/run/docker.sock: \"/var/run/docker.sock\" includes invalid characters for a local volume name, only \"[a-zA-Z0-9][a-zA-Z0-9_.-]\" are allowed. If you intended to pass a host directory, use absolute path"}
I also tried that configuration:
Reason: Template provisioning failed.
com.github.dockerjava.api.exception.BadRequestException: {"message":"invalid mount config for type \"volume\": invalid mount path: './var/run/docker.sock' mount path must be absolute"}
I do not get what that means as on my linux vm the docker.sock absolute path is /var/run/docker.sock, and it is the same path inside the docker in docker I ran locally...
I tried to check the source code to find what I did wrong, but it's unclear to me what the code is doing (https://github.com/jenkinsci/docker-plugin/blob/master/src/main/java/com/nirima/jenkins/plugins/docker/DockerTemplateBase.java, from line 884 onward). I also tried with backslashes, etc. Nothing worked.
Has anyone any idea what is the expected syntax in that configuration panel for setting up a simple volume?
Change the configuration to this:
type=bind,source=/var/run/docker.sock,destination=/var/run/docker.sock
It is not a volume; it is a bind mount.
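This mount string follows the same syntax as the docker CLI's --mount flag, so you can sanity-check it on the host before putting it into the plugin configuration (the agent image name is a placeholder):
docker run --rm --mount type=bind,source=/var/run/docker.sock,destination=/var/run/docker.sock <agent-image>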
This worked for me:
type=bind,source=/sys/fs/cgroup,target=/sys/fs/cgroup,readonly
I understand there are many questions about how to read docker logs that are answered by:
$ docker logs containername
However, I am working with an ephemeral container, one created with --rm, so I do not have time to call logs after creating it. But I am still interested in seeing the logs of how it ran.
My command is:
docker run --name myname --rm python-my-script:3.7.4 - --myflags "myargs"
Now, I'd like to see how my script runs with these arguments. My entrypoint has a script that should effectively be reading in and printing "myargs" to the console.
But when I do:
docker logs myname
Error: No such container: myname
Or if I'm really quick:
Error response from daemon: can not get logs from container which is dead or marked for removal
How can I see the logs of a container that is no longer running? I'd prefer not to install something heavyweight like syslog.
The default logging driver for Docker is json-file, whose output you can see with docker logs. But if you delete the container, or use --rm when running it, the logs are deleted when the container is removed.
For your case, you need to change the logging driver so the log can still be seen even after the container is deleted.
There are lots of logging drivers that could meet your requirements (see this), e.g. fluentd, splunk, etc.
Here is one of the simplest ways to preserve the log: use journald. A minimal example for your reference:
Start the container with the journald log driver, and set a container name that will be used later to retrieve the log:
$ docker run --log-driver=journald --rm --name=trial alpine echo "hello world"
After the container finishes printing "hello world", it is deleted because --rm was specified; check that docker logs can indeed no longer find it:
$ docker logs trial
Error: No such container: trial
Use journalctl to see if we can get the log:
$ journalctl CONTAINER_NAME=trial --all
-- Logs begin at Mon 2018-12-17 21:35:55 CST, end at Mon 2019-08-05 14:21:19 CST. --
Aug 05 14:18:26 shubuntu1 a475febe91c1[1975]: hello world
You can see that journalctl retrieves the log content "hello world" even though the container was removed.
BTW, if you do not want to specify --log-driver every time you start a container, you can also set it as the default log driver in daemon.json, see this:
{
  "log-driver": "journald"
}
Meanwhile, you can still use docker logs to get the logs as long as the container has not been deleted.
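Note that after editing daemon.json the docker daemon has to be restarted for the new default to take effect, and it only applies to containers created afterwards. A sketch, assuming a systemd-based host:
$ sudo systemctl restart docker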
I am using the tomcat:9.0-jre8-alpine image to deploy my application. When I run the command below, it works perfectly and displays logs.
docker logs -f <containername>
But after a few hours the logs get stuck: whatever operations we perform on the application, no new logs are displayed. The container is running as expected and there is enough RAM and disk space on the VM.
Note: I run the same container on 3 different VMs. Only 1 VM has this problem.
How can I debug/resolve the issue?
Check your docker version; if it is too old you may be hitting https://github.com/moby/moby/issues/35332, a deadlock caused by the github.com/fsnotify/fsnotify pkg (see the fsnotify PR).
Check the daemon config in /etc/docker/daemon.json for the docker log configuration.
You also need to check the container configuration with docker inspect to see the log options.
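For example (the container name is a placeholder), this prints the driver in use and any options such as max-size:
$ docker inspect --format '{{json .HostConfig.LogConfig}}' <container>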
Sometimes I look into /var/lib/docker/containers/<container-id>/<container-id>-json.log to see the log, if you use the json-file log format.
If you use journald, you may find the log in /var/log/messages.
The goal is to run docker containers on my NanoPi in the same manner as on an Ubuntu server machine.
I have recently run into the following error when attempting docker run -it kylemanna/openvpn:
standard_init_linux.go:185: exec user process caused "exec format error"
I also get the same error when executing docker-compose using the container approach.
Since I get the problem whether I use docker-compose or not, I am starting to think that the error might be down to my usage of docker on the NanoPi; it may not be supported in the same way.
However, I can execute other containers/images just fine, hello-world, ubuntu, etc.
How do I go about determining the cause of this error? Where is the source code for standard_init_linux.go:185? And, what am I doing incorrectly?
Through trial and error, I discovered that if I rebuilt the openvpn image directly from the GitHub repository on the machine the container would run on (docker build <url>), this error was resolved for the openvpn container but not (yet) for docker-compose. I imagine rebuilding the docker-compose container will fix the issue with that one too.
This is most likely due to a binary not having been compiled for the machine type that I was using.
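If you want to confirm the mismatch before rebuilding, one way (a sketch using the image from the question) is to compare the architecture the image was built for with the architecture of the host:
$ docker image inspect --format '{{.Os}}/{{.Architecture}}' kylemanna/openvpn
$ uname -m   # e.g. armv7l or aarch64 on an ARM board
Running an amd64 image on an ARM board produces exactly this kind of "exec format error".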
Source/Inspiration: https://github.com/moby/moby/issues/23865