I have a configured Docker environment with a logging driver that sends all logs to a logging server. For this to work with Apache NiFi, all NiFi logs should be sent to stdout and stderr. By default the NiFi Docker container tails the nifi-app.log file, so those logs are routed to the logging driver.
There are two issues:
nifi-user.log messages are not tailed.
Log files are persisted in a separate volume. I don't want the logs to be stored anywhere except my central logging server.
There is one thread here but it does not resolve anything. The real problem is that even if all appender-refs are set to CONSOLE, all messages are intercepted line by line by an org.apache.nifi.StdOut logger. Setting that logger's level to OFF turns off logging of any messages after the "Launched Apache NiFi with Process ID" entry.
Is there a way to configure NiFi Docker image to avoid storing logs into files and route them directly to standard output?
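One approach is to edit NiFi's conf/logback.xml so the loggers that normally feed nifi-user.log write to a console appender instead. The sketch below assumes the standard logback.xml layout; the logger name shown is one that NiFi's default config routes to nifi-user.log, so verify the names against your NiFi version before relying on it.

```xml
<!-- Sketch for conf/logback.xml: route a logger that normally goes to
     nifi-user.log to the console instead. Verify the logger names
     against your NiFi version's default configuration. -->
<appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
        <pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
    </encoder>
</appender>

<logger name="org.apache.nifi.web.security" level="INFO" additivity="false">
    <appender-ref ref="CONSOLE"/>
</logger>
```

You can mount a customized logback.xml into the container over the default one rather than building a new image.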
Related
I am working with logs in my system.
I want to use a log sidecar to collect my business container's logs.
My business container writes its logs to STDOUT.
So I want to redirect this STDOUT to a file on the pod's volume; since all containers in a pod share the same volumes, my sidecar can then collect the logs from that volume.
How should I configure this?
I mean, maybe I should write some configuration in my k8s YAML so that k8s automatically redirects the container's STDOUT to a file on the pod's volume?
Adding > /<your_path_to_volume_inside_pod>/file.log 2>&1 to your command would redirect both STDOUT and STDERR to a file. Note the order: the file redirection must come before 2>&1, otherwise STDERR still points at the original STDOUT.
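A minimal demonstration of the redirection, with the order that captures both streams (the path below is just an example stand-in for the mounted volume):

```shell
#!/bin/sh
# Example directory standing in for the volume mount inside the pod.
mkdir -p /tmp/pod-volume

# Redirect stdout to the file first, then point stderr at stdout,
# so both streams land in the same file.
{ echo "out"; echo "err" >&2; } > /tmp/pod-volume/file.log 2>&1

# Both lines are now in the file.
cat /tmp/pod-volume/file.log
```

With the reversed order (`2>&1 > file.log`), stderr would still go to the terminal instead of the file.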
You could use a sidecar container with a logging agent
Streaming sidecar container
By having your sidecar containers write to their own stdout and stderr streams, you can take advantage of the kubelet and the logging agent that already run on each node.
The sidecar containers read logs from a file, a socket, or journald. Each sidecar container prints a log to its own stdout or stderr stream.
This approach allows you to separate several log streams from different parts of your application, some of which can lack support for writing to stdout or stderr.
The logic behind redirecting logs is minimal, so it's not a significant overhead.
Additionally, because stdout and stderr are handled by the kubelet, you can use built-in tools like kubectl logs.
In your case, it depends on how your application pod can be configured (for instance, with the journald service active, in order to record logs), and the backend would be your shared volume file.
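The streaming-sidecar pattern described above can be sketched as a pod spec like the following; the image and file names are placeholders, and the emptyDir volume is one assumption for the shared backend:

```yaml
# Sketch of the streaming-sidecar pattern (names and images are placeholders).
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  containers:
  - name: app
    image: my-app:latest          # hypothetical business container
    volumeMounts:
    - name: varlog
      mountPath: /var/log/app     # app writes its log file here
  - name: log-streamer
    image: busybox
    # The sidecar tails the shared file and prints it to its own stdout,
    # so the kubelet (and kubectl logs) can pick it up.
    args: [/bin/sh, -c, 'tail -n+1 -F /var/log/app/file.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log/app
  volumes:
  - name: varlog
    emptyDir: {}
```

With this in place, `kubectl logs app-with-log-sidecar log-streamer` shows the application's log stream.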
I have one question: is there any way to ship the logs of each container when the log files are located inside the containers? Currently, the flow ships the log files located in the default path (var/lib/docker/containers//.log). I want to customize filebeat.yml to ship the logs from each container to Logstash instead of from the default path.
If you can set your containers to log to stdout rather than to files, Filebeat has an autodiscover mode that will capture the Docker logs of every container.
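A sketch of what that autodiscover configuration might look like in filebeat.yml; the Logstash host is a placeholder, and the exact input type (`container` vs. the older `docker`) depends on your Filebeat version:

```yaml
# Hypothetical filebeat.yml excerpt: discover Docker containers
# and ship their stdout/stderr logs to Logstash.
filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - config:
            - type: container
              paths:
                - /var/lib/docker/containers/${data.docker.container.id}/*.log
output.logstash:
  hosts: ["logstash.example.com:5044"]   # placeholder host
```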
Another common setup in an ELK world is to configure logstash on your host, and set up Docker's logging options to send all output on containers' stdout into logstash. This makes docker logs not work, but all of your log output is available via Kibana.
If your container processes always write to log files, you can use the docker run -v option or the Docker Compose volumes: option to mount a host directory onto an individual container's /var/log directory. The log files will then be visible on the host, and you can use whatever file-based collector you like to capture them. This is in the realm of routine changes, but it will require you to stop and delete your existing containers before starting them with different options.
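The Compose variant of that mount might look like this; the service name, image, and host directory are placeholders:

```yaml
# Hypothetical docker-compose.yml excerpt: expose a container's
# /var/log directory on the host for a file-based collector.
services:
  app:
    image: my-app:latest        # placeholder image
    volumes:
      - ./logs/app:/var/log     # host directory ./logs/app receives the files
```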
I know docker service logs gets logs from containers that are part of that service. But how does it fetch them? Does it fetch once and cache them somewhere, or does it fetch the logs over the network every time I issue docker service logs?
As mentioned in my comment and the other answer, the docker engine always caches logs of the containers running on that engine, storing them in /var/lib/docker/containers/<container id>/<container id>-json.log. When you run docker service logs from a machine where the containers of the said service are not running, docker always pulls the logs from the other machine over the network; it never caches them.
That being said, the error you're facing, received message length 1869051448 exceeding the max size 4194304, occurs because there is likely a log line that is simply too long to fit in the gRPC message being sent across the network.
Solution
Specify the --tail <n> option to docker service logs, where n is the number of lines from the end of the logs you want to see.
Specify a task ID from docker service ps instead of a service name, giving you the logs from that task alone rather than the aggregated logs from across the service replicas.
This might still give you the error if you still have that long log line in your pulled logs.
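The two workarounds above look like this on the command line (the service name "web" is an assumption):

```shell
# Limit output to the last 100 lines of the service's logs:
docker service logs --tail 100 web

# Or fetch the logs of a single task instead of the whole service:
docker service ps web          # note a task ID from the output
docker service logs <task-id>
```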
By default docker logs to:
/var/lib/docker/containers/<container id>/<container id>-json.log
This question is already answered.
For some advanced logging options see logging drivers.
So I have a microservice running as a Docker container under .NET Core and logging to Application Insights on Azure as it writes large amounts of data fetched from EventHub to a SQL Server.
Every once in a while I get an unhandled SqlException that appears to be thrown on a background thread, meaning that I can't catch it and handle it, nor can I fix this bug.
The workaround has been to set the restart policy to always and the service restarts. This works well, but now I can't track this exception in Application Insights.
I suppose the unhandled exception is written by the CLR to stderr, so it appears in the Docker logs with some grepping. But is there a way to check for this on startup and then log it to Application Insights, so I can discover it without logging onto the Swarm cluster and grepping for restart info?
To persist logs,
Approach 1
Mount the Docker log directory into the host machine.
Example:
docker run --name Container_1 -v /host_dir/logs:/var/log/app docker_image:version
The Docker container will write its logs to the /var/log/app directory. The logs will then be persisted in the /host_dir/logs directory of the host, and survive container restarts too.
Approach 2
Configure a logging driver like syslog or fluentd in Docker. See https://docs.docker.com/engine/admin/logging/overview/ for how to configure it.
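For example, the syslog driver can be enabled per container; the remote address below is a placeholder:

```shell
# Hypothetical example: send this container's logs to a remote syslog server.
docker run --log-driver syslog \
  --log-opt syslog-address=udp://logs.example.com:514 \
  docker_image:version
```

Note that with a non-default logging driver, `docker logs` may no longer work for that container.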
I have a server that is the host OS for multiple Docker containers. Each of the containers contains an application that creates logs. I want these logs to be sent to a single place by using the syslog daemon, and then I want Filebeat to transmit this data to another server. Is it possible to install Filebeat on the host OS (without making another container for Filebeat), and have the containers' application log data collected by the syslog daemon and consolidated in /var/log on the host OS? Thanks.
You need to share a volume with every container in order to get your logs in the host filesystem.
Then, you can install filebeat on the host and forward the logs where you want, as they were "standard" log files.
Please be aware that Docker containers usually do not write their logs to real log files, but to stdout. That means you'll probably need custom images in order to fix this logging problem.
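Once the shared volumes are in place, the host-side Filebeat setup could be sketched like this; the directory layout and Logstash host are assumptions:

```yaml
# Hypothetical filebeat.yml on the host, assuming each container mounts a
# subdirectory of /var/log/containers as its log directory via a shared volume.
filebeat.inputs:
  - type: log
    paths:
      - /var/log/containers/*/*.log
output.logstash:
  hosts: ["logstash.example.com:5044"]   # placeholder host
```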