How does docker service logs fetch logs?

I know docker service logs gets logs from the containers that are part of that service. But how does it fetch them? Does it fetch once and cache the result somewhere, or does it fetch the logs over the network every time I issue the command "docker service logs"?

As mentioned in my comment and the other answer, the docker engine always caches the logs of the containers running on that engine and stores them in the file /var/lib/docker/containers/<container id>/<container id>-json.log. When you run docker service logs on a machine where the containers of the said service are not running, docker always pulls the logs from the remote machine over the network; it never caches them locally.
That being said, the error you're facing, received message length 1869051448 exceeding the max size 4194304, most likely occurs because there is a log line that is simply too long to fit in the gRPC message being sent across the network.
Solution
- Specify the --tail <n> option to docker service logs, where n is the number of lines from the end of the logs you want to see.
- Specify a task ID from docker service ps instead of a service name; this gives you the logs of that one task rather than the aggregated logs from across the service's replicas.
Note that either option may still produce the error if the offending long log line is among the lines you pull; both options are sketched below.
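For example (the service name my_service and the task ID are placeholders):
# Show only the last 100 lines, aggregated across the service's replicas
docker service logs --tail 100 my_service
# List the service's tasks to find an individual task ID
docker service ps my_service
# Fetch the logs of a single task instead of the whole service
docker service logs <task id>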

By default docker writes container logs to:
/var/lib/docker/containers/<container id>/<container id>-json.log
For more advanced logging options, see logging drivers.
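Two quick ways to check this on a given host; my_container is a placeholder name:
# Print the path of a container's JSON log file on this host
docker inspect --format '{{.LogPath}}' my_container
# Print the default logging driver configured for the daemon
docker info --format '{{.LoggingDriver}}'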

Related

Is it possible to show the service logs of only running Docker Swarm containers?

Currently the docker service logs command shows logs from all of a service's containers, including the ones that have already terminated, and the logs are not guaranteed to be displayed in chronological order.
Is it possible to somehow hide the logs of the containers which have been terminated?
The problem is that after doing
docker service update --force my_service
the docker service logs command shows logs sorted by time and service name, and the logs from newly started containers may appear before the logs from the terminated ones.
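One possible workaround sketch, not a definitive answer, is to fetch logs per task, restricted to tasks whose desired state is running; my_service is a placeholder:
# List only the tasks that are supposed to be running (skips terminated ones)
docker service ps -q --filter desired-state=running my_service
# Fetch logs task by task instead of service-wide
for task in $(docker service ps -q --filter desired-state=running my_service); do
  docker service logs "$task"
done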

Persist log across Docker container restarts

So I have a microservice running as a Docker container under .NET Core and logging to Application Insights on Azure as it writes large amounts of data fetched from EventHub to a SQL Server.
Every once in a while I get an unhandled SqlException that appears to be thrown on a background thread, meaning that I can't catch it and handle it, nor can I fix this bug.
The workaround has been to set the restart policy to always and the service restarts. This works well, but now I can't track this exception in Application Insights.
I suppose the unhandled exception is written by the CLR to stderr so it appears in the Docker logs with some grepping, but is there a way to check for this on start up and subsequently log it to Application Insights so I can discover it without logging onto the Swarm cluster and grep for restart info?
To persist logs:
Approach 1
Mount the container's log directory onto the host machine.
Example:
docker run --name Container_1 -v /host_dir/logs:/var/log/app docker_image:version
The container writes its logs to the /var/log/app directory. The logs are then persisted in the /host_dir/logs directory on the host across container restarts, too.
Approach 2
Configure a logging driver such as syslog or fluentd in docker. You can look at https://docs.docker.com/engine/admin/logging/overview/ for how to configure it.
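A minimal sketch of the syslog variant (the address is a placeholder for your syslog endpoint):
# Send the container's stdout/stderr to a remote syslog server instead of the default json-file
docker run --log-driver syslog \
  --log-opt syslog-address=udp://192.168.0.42:514 \
  docker_image:version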

Running filebeat on docker host OS and collecting logs from containers

I have a server that is the host OS for multiple docker containers. Each of the containers contains an application that creates logs. I want these logs to be sent to a single place by using the syslog daemon, and then I want filebeat to transmit this data to another server. Is it possible to install filebeat on the host OS (without making another container for filebeat) and have the containers' applications' log data collected by the syslog daemon and consolidated in /var/log on the host OS? Thanks.
You need to share a volume with every container in order to get your logs onto the host filesystem.
Then you can install filebeat on the host and forward the logs wherever you want, as if they were "standard" log files.
Please be aware that docker containers usually do not write their logs to real log files, but to stdout. That means you'll probably need custom images in order to change this logging behaviour.
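A minimal sketch of the shared-volume part (the paths and image name are placeholders):
# Bind-mount a host directory over the path the app logs to inside the container
docker run --name app1 -v /var/log/containers/app1:/var/log/app my_app_image
# On the host, filebeat can then be pointed at /var/log/containers/*/*.log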

Is there any way to get logs in Docker Swarm?

I want to see the logs from my Docker Swarm service. Not only because I want all my logs to be collected for the usual reason, but also because I want to work out why the service is crashing with "task: non-zero exit (1)".
I see that there is work in the pipeline to implement docker logs for services, but is there a way to access logs for production services today? Or is Docker Swarm not ready for production with regard to logging?
With Docker Swarm 17.03 you can now access the logs of a multi-instance service via the command line.
docker service logs -f {NAME_OF_THE_SERVICE}
You can get the name of the service with:
docker service ls
Note that this was an experimental feature at the time (not production ready), and in order to use it you had to enable experimental mode on the docker daemon.
Update: docker service logs is now a standard feature of docker >= 17.06. https://docs.docker.com/engine/reference/commandline/service_logs/#parent-command
similar question: How to log container in docker swarm mode
What we've done successfully is utilize Graylog. If you look at the docker run documentation, you can specify a log-driver and log-options that allow you to send all console messages to a Graylog cluster.
docker run... --log-driver=gelf --log-opt gelf-address=udp://your.gelf.ip.address:port --log-opt tag="YourIdentifier"
You can also technically configure it at the global level for the docker daemon, but I would advise against that. It won't let you add the "Tag" option, which is exceptionally useful for filtering down your results.
Docker service definitions also support log driver and log options, so you can use docker service update to adjust your services without destroying them.
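For instance, a sketch of adjusting an existing service (the address, tag, and service name are placeholders):
# Switch a running service to the gelf driver without recreating it
docker service update \
  --log-driver gelf \
  --log-opt gelf-address=udp://192.0.2.10:12201 \
  --log-opt tag=my_service \
  my_service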
As the documentation says:
docker service logs [OPTIONS] SERVICE|TASK
resource: https://docs.docker.com/engine/reference/commandline/service_logs/

Docker pid namespace and Host

When we run the same process in docker and on the host system, how does it differentiate one from the other from the perspective of audit logs?
Can I view a process running in docker from the host system?
You would not run the same process (same pid) in docker and on the host, since the purpose of a container is to provide isolation (of both processes and the filesystem).
I mentioned in your previous question "Docker Namespace in kernel level" that the pid of a process run in a container could be made visible from the host.
But in terms of audit logs, you can configure logging drivers in order to follow only containers and ignore processes running directly on the host.
For instance, in this article, Mark configures rsyslog to isolate the Docker logs into their own file.
To do this create /etc/rsyslog.d/10-docker.conf and copy the following content into the file using your favorite text editor.
# Docker logging
daemon.* {
    /var/log/docker.log
    stop
}
In summary, this will write all logs for the daemon category to /var/log/docker.log, then stop processing that log entry so it isn't written to the system's default syslog file.
That should be enough to clearly differentiate the logs of host processes (in the regular syslog) from those of processes running in containers (in /var/log/docker.log).
Update May 2016: issue 10163 (--pid=container:id) was closed by PR 22481 for docker 1.12, allowing you to join another container's PID namespace.
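A short sketch of both points above (my_container is a placeholder name):
# Print the host-side PID of a container's main process...
docker inspect --format '{{.State.Pid}}' my_container
# ...and inspect it from the host with ordinary tools
ps -fp "$(docker inspect --format '{{.State.Pid}}' my_container)"
# Since docker 1.12, a new container can join another container's PID namespace
docker run --rm -it --pid=container:my_container alpine ps aux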
