Persist log across Docker container restarts - docker

So I have a microservice running as a Docker container under .NET Core and logging to Application Insights on Azure as it writes large amounts of data fetched from EventHub to a SQL Server.
Every once in a while I get an unhandled SqlException that appears to be thrown on a background thread, meaning that I can't catch it and handle it, nor can I fix this bug.
The workaround has been to set the restart policy to always and the service restarts. This works well, but now I can't track this exception in Application Insights.
I suppose the unhandled exception is written by the CLR to stderr, so it shows up in the Docker logs with some grepping. But is there a way to check for this on startup and then log it to Application Insights, so I can discover it without logging onto the Swarm cluster and grepping for restart info?
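One possible approach, sketched below in Python for illustration (a .NET Core startup hook would be analogous): if the json-file log from the container's previous run is reachable on startup, for instance via a bind mount of the host's /var/lib/docker/containers path, scan it for the crash and report any hits to Application Insights. The function name, the mount assumption, and the "SqlException" match are all illustrative; the actual Application Insights call is omitted.

```python
import json
from pathlib import Path

def find_unhandled_exceptions(log_path):
    """Scan a Docker json-file log for stderr lines that mention the
    SqlException left behind by the previous run of this container."""
    hits = []
    for line in Path(log_path).read_text().splitlines():
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip partial/corrupt lines
        if entry.get("stream") == "stderr" and "SqlException" in entry.get("log", ""):
            hits.append(entry["log"].rstrip("\n"))
    return hits  # on startup, report each hit to Application Insights
```

Each line in a json-file log is a JSON object with `log`, `stream`, and `time` keys, which is what the sketch relies on.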

To persist logs:
Approach 1
Mount the container's log directory onto the host machine.
Example:
docker run --name Container_1 -v /host_dir/logs:/var/log/app docker_image:version
The container writes its logs to /var/log/app, and because of the bind mount they are persisted in /host_dir/logs on the host across container restarts, too.
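On the application side, a minimal sketch of writing into that mounted directory (Python for illustration; the file name and rotation settings are arbitrary choices, not requirements):

```python
import logging
import logging.handlers
import os

def setup_logging(log_dir="/var/log/app"):
    """Configure file logging into the bind-mounted directory
    (/var/log/app matches the docker run example above), so entries
    survive container restarts."""
    os.makedirs(log_dir, exist_ok=True)
    handler = logging.handlers.RotatingFileHandler(
        os.path.join(log_dir, "service.log"),
        maxBytes=10_000_000, backupCount=3)
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger = logging.getLogger("service")
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger
```

Rotation keeps the host directory from growing without bound, since the files now outlive the container.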
Approach 2
Configure a logging driver such as syslog or fluentd in Docker. See https://docs.docker.com/engine/admin/logging/overview/ for how to configure one.

Related

Make Docker ignore daemon.json configuration on start

Currently we have multiple Docker containers (host A).
We send the logs from each container to a logger (which runs in a Docker container on another server).
Here is my daemon.json:
{
  "log-driver": "gelf",
  "log-opts": {
    "gelf-address": "tcp://10.*.*.*:12201"
  },
  "dns": [
    "10.*.*.*"
  ],
  "icc": false
}
The problem is that if the logger container is not running and I restart host A, the containers do not start because they cannot connect to the logger.
Is there any way to configure the Docker containers to start even if they cannot connect to the logger configured in daemon.json?
Thank you.
With this you are not configuring Docker containers, but the daemon itself. If you restart your host, you restart the daemon, and on startup it reads the config. If the config is invalid or parts of it are not working, it doesn't start up. You can manually start the Docker daemon with a manual configuration like
dockerd --debug \
--tls=true \
--tlscert=/var/docker/server.pem \
--tlskey=/var/docker/serverkey.pem \
--host tcp://192.168.59.3:2376
see: Docker daemon documentation
Keep in mind that it will keep running with those options until it's restarted.
The logging settings in daemon.json are defaults for newly created containers. Changing this file will not change existing containers being restarted.
You may want to reconsider your logging design. One option is to swap out the logging driver for a logging forwarder, leaving the logs in the default json driver, and having another process monitor those and forward the logs to the remote server. This avoids blocking at the cost of missing some logs written just as the container is deleted (or very short lived containers). The other option is to improve the redundancy of your logging system since it is a single point of failure that blocks your workloads from running.
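The forwarder idea can be sketched as follows (the function and polling scheme are illustrative, not an existing tool): a sidecar process tails the container's json-file log and pushes each new entry to the remote logger. If the logger is down, the read offset simply stops advancing and the entries are retried on the next poll, so the logged container itself is never blocked:

```python
import json

def forward_new_lines(log_path, offset, send):
    """Forward docker json-file log entries written since `offset` by
    calling `send(message)` for each one (e.g. a function posting to
    the remote logger).  Returns the new offset for the next poll; on
    any failure the offset stops advancing, so nothing is lost and the
    container being logged is never blocked."""
    with open(log_path, "r") as f:
        f.seek(offset)
        while True:
            line = f.readline()
            if not line:
                return offset  # caught up; poll again later
            try:
                send(json.loads(line)["log"])
            except Exception:
                return offset  # logger down or partial line: retry next poll
            offset = f.tell()
```

A real deployment would loop over all containers' log files and persist the offsets, but the non-blocking property is the point here.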

Replace log files with console logs in Apache NiFi Docker Container

I have a configured Docker environment and a Logging Driver which sends all logs to a logging server. For this to work with Apache NiFi all NiFi logs should be sent to StdOut and StdErr. By default NiFi Docker Container tails a nifi-app.log file, so all logs are routed to a Logging Driver.
There are two issues:
nifi-user.log messages are not tailed.
Log files are persisted in a separate volume. I don't want the logs to be stored anywhere except my central logging server.
There is one thread here but it does not resolve anything. The real problem is that even if all appender-refs are set to CONSOLE, all messages are intercepted line by line by an org.apache.nifi.StdOut logger. Setting that logger's level to OFF turns off logging of any messages after the "Launched Apache NiFi with Process ID" entry.
Is there a way to configure NiFi Docker image to avoid storing logs into files and route them directly to standard output?

how will docker service logs fetch logs?

I know docker service logs gets logs from the containers that are part of that service. But how does it fetch them? Does it fetch once and cache somewhere, or does it fetch the logs over the network every time I issue docker service logs?
As mentioned in my comment and the other answer, the docker engine always keeps the logs of the containers running on that engine in the file /var/lib/docker/containers/<container id>/<container id>-json.log. When you run docker service logs from a machine where the containers of the said service are not running, docker always pulls the logs from the other machines over the network; it never caches them.
That being said, the error you're facing, received message length 1869051448 exceeding the max size 4194304, occurs because there is likely a log line that is simply too long to fit in the gRPC message being sent across the network.
Solution
Specify the --tail <n> option to docker service logs where n is the number of lines from the end of the logs you want to see
Specify a task ID from docker service ps instead of a service name, giving you the logs from that task alone rather than the aggregated logs from across the service replicas.
This might still give you the error if you still have that long log line in your pulled logs.
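If the oversized lines come from your own application, one workaround at the source is to split big messages before emitting them, so no single log line ever approaches the 4 MB gRPC cap. The chunking scheme below is an assumption for illustration, not a docker feature:

```python
CHUNK = 16 * 1024  # keep each emitted line far below the 4194304-byte gRPC cap

def emit_chunked(message, write):
    """Emit `message` via `write` (print, a logger call, ...), splitting
    it into numbered chunks if it is too long to ship as one log line."""
    if len(message) <= CHUNK:
        write(message)
        return
    parts = [message[i:i + CHUNK] for i in range(0, len(message), CHUNK)]
    for n, part in enumerate(parts, 1):
        write(f"[chunk {n}/{len(parts)}] {part}")
```

The chunk prefix lets a downstream consumer reassemble the original message if needed.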
By default docker logs to:
/var/lib/docker/containers/<container id>/<container id>-json.log
For some advanced logging options see logging drivers.

Running filebeat on docker host OS and collecting logs from containers

I have a server that is the host OS for multiple docker containers. Each of the containers contains an application that is creating logs. I want these logs to be sent to a single place by using the syslog daemon, and then I want filebeat to transmit this data to another server. Is it possible to install filebeat on the HOST OS (without making another container for filebeat), and make the containers applications' log data be collected by the syslog daemon and then consolidated in /var/log on the host OS? Thanks.
You need to share a volume with every container in order to get your logs in the host filesystem.
Then, you can install filebeat on the host and forward the logs where you want, as they were "standard" log files.
Please be aware that usually docker containers do not write their logs to real log files, but to stdout. That means you'll probably need custom images in order to fix this logging setup.
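As a sketch of the application side (Python here for illustration; the address and logger name are placeholders), an app inside a container can hand its logs to the host's syslog daemon over UDP instead of writing files, after which filebeat on the host picks them up from /var/log:

```python
import logging
import logging.handlers

def syslog_logger(address=("127.0.0.1", 514), name="app"):
    """Send application logs to a syslog daemon over UDP instead of
    writing files inside the container.  With the host's syslog
    reachable from the container, entries land in the host's /var/log
    where filebeat can collect them."""
    handler = logging.handlers.SysLogHandler(address=address)
    logger = logging.getLogger(name)
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger
```

The container still needs network (or a bind-mounted /dev/log socket) to reach the host's syslog daemon; that wiring is the part each setup has to decide.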

Docker pid namespace and Host

When we run the same process in Docker and on the host system, how does it differentiate one from the other, from the perspective of audit logs?
Can I view the process running in docker in host system?
You would not run the same process (same pid) in Docker and on the host, since the purpose of a container is to provide isolation (of both processes and filesystem).
I mentioned in your previous question "Docker Namespace in kernel level" that the pid of a process run in a container could be made visible from the host.
But in terms of audit logs, you can configure logging drivers in order to follow only containers and ignore processes running directly on the host.
For instance, in this article, Mark configures rsyslog to isolate the Docker logs into their own file.
To do this create /etc/rsyslog.d/10-docker.conf and copy the following content into the file using your favorite text editor.
# Docker logging
daemon.* {
  /var/log/docker.log
  stop
}
In summary, this writes all logs for the daemon facility to /var/log/docker.log and then stops processing that log entry so it isn't also written to the system's default syslog file.
That should be enough to clearly differentiate the host processes logs (in regular syslog) from the ones running in containers (in /var/log/docker.log)
Update May 2016: issue 10163 and --pid=container:id were closed by PR 22481 for docker 1.12, allowing a container to join another container's PID namespace.
