Stopping docker container from generating logs when fluentd logging driver is used

I am forwarding my docker containers' logs to fluentd at a remote location. I do not want the containers to generate logs locally on the same machine anymore (just to avoid storage/space consumption).
Following is my configuration in the docker-compose file for my services:
logging:
  driver: fluentd
  options:
    fluentd-address: dev-fluentd_ip.com:2224
Is there any way to stop docker containers from generating logs locally?
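For context, a minimal sketch of how that logging block sits inside a full service definition (the service and image names are placeholders, and the tag / fluentd-async options are shown only as examples of other options the fluentd driver accepts):
services:
  app:                                  # placeholder service name
    image: my-app:latest                # placeholder image
    logging:
      driver: fluentd
      options:
        fluentd-address: dev-fluentd_ip.com:2224
        fluentd-async: "true"           # keep the container running if fluentd is briefly unreachable
        tag: docker.{{.Name}}           # tag each record with the container name
With the fluentd driver selected, the container's stdout/stderr is sent to the configured address rather than written by the default json-file driver.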

Related

Disable ipv6 in nginx inside docker container

I am struggling with a docker container and its network configuration. I am running it on a server that does not have IPv6 enabled for policy reasons, and I am using a docker image that runs nginx with IPv6 enabled in the nginx config inside the image.
I have tried everything to disable IPv6 inside the image and in the docker-compose.yml file, but it just won't work. Whenever I try to bring up the compose with docker compose up, the log constantly says nginx: [emerg] socket() [::]:5500 failed (97: Address family not supported by protocol).
I tried disabling IPv6 in the docker-compose.yml file
networks:
  cont:
    driver: bridge
    enable_ipv6: false
But it still gives the same error. I have tried entering the container with docker exec -it container /bin/bash, but it is constantly restarting, so I cannot get in to change the configuration. If I remove the restart: unless-stopped parameter, the container just stops and still won't let me in, and adding tty: true doesn't help either because the container keeps stopping and restarting.
How can I disable IPv6 for good, so that nginx stops erroring and restarting the whole container, without having to enable IPv6 on my server (which I cannot do)?
Edit: I have also tried adding the following to the compose file:
sysctls:
  - net.ipv6.conf.all.disable_ipv6=1
but I get Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: open /proc/sys/net/ipv6/conf/all/disable_ipv6: no such file or directory: unknown (I'm running RHEL 8.7 and docker 23.0.1, btw).
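For reference, the relevant pieces fit together in the compose file roughly like this (the service name and image are placeholders; note that enable_ipv6: false only applies to containers that are actually attached to that network):
services:
  web:                         # placeholder for the nginx-based service
    image: my-nginx-image      # placeholder image
    restart: unless-stopped
    networks:
      - cont                   # attach the service to the IPv4-only bridge
networks:
  cont:
    driver: bridge
    enable_ipv6: false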

Cannot send UDP packets over a local network using Docker-Compose

I have an application that is meant to send UDP messages to other devices on a local network. When I run the application as a standalone Docker container using the docker run command, the behavior is as expected and the messages are sent to the correct address and port that corresponds to a computer on the local network. Note that it works whether I run it with the bridge or host network.
However, when attempting to run the application through docker compose, the UDP messages are not sent. To verify that there was no conflict with other containers running in compose, I ran the container on its own in docker-compose and the messages were still not being sent. I also tried running the container in docker-compose while specifying network_mode: host. I checked Wireshark and it reported that UDP messages were being sent when the application was started with docker run, but none appeared when running with docker-compose. Additionally, I enabled IPv4 forwarding from docker containers to the outside world on the host machine as described here, with no luck.
Here are the two ways I am running the container:
Docker:
docker run --network host -e OUTPUT=192.168.1.3:14551 container_name
Docker-Compose:
version: "3"
services:
  name:
    image: name
    network_mode: host # have tried with and without this
    environment:
      - OUTPUT=192.168.1.3:14551
I have also tried exposing port 14551 in a ports section of the docker-compose file; however, that did not change anything.
What could explain the difference in behavior with docker vs docker-compose? Is it due to an extra layer of networking with docker compose specifically? Is there a workaround to get docker-compose working?
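For reference, a ports section for UDP would look roughly like this (published ports default to TCP, so the /udp suffix is needed; they are also ignored entirely when network_mode: host is set):
services:
  name:
    image: name
    environment:
      - OUTPUT=192.168.1.3:14551
    ports:
      - "14551:14551/udp"   # publish the UDP port explicitly
Publishing a port only affects inbound traffic, so on its own it would not change whether outbound UDP messages leave the container.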

How do you set up Filebeat on docker?

I am trying to set up Filebeat on Docker. The rest of the stack (Elastic, Logstash, Kibana) is already set up.
I want to forward syslog files from /var/log/ to Logstash with Filebeat. I created a new filebeat.yml file on the host system under /etc/filebeat/ (I created this filebeat directory myself, not sure if that's correct?):
output:
  logstash:
    enabled: true
    hosts: ["localhost:5044"]
filebeat:
  inputs:
    - paths:
        - /var/log/syslog
        - /var/log/auth.log
      document_type: syslog
Then I ran the Filebeat container: sudo docker run -v /etc/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml docker.elastic.co/beats/filebeat:7.4.2
It is able to run, but no files are actually being forwarded to logstash. I am thinking the issue is with the filebeat.yml configuration...
Any thoughts?
As David Maze intimated, the reason that filebeat isn't forwarding any logs is that it only has access to logs within its own container. You can share the host's logs with it using another bind mount. My preferred option when using docker + filebeat, though, is to have filebeat listen on a TCP/IP port and have the log source forward logs to that port.
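A minimal sketch of the bind-mount approach, reusing the config file path from the question and mounting the host's /var/log read-only so the paths in filebeat.yml resolve inside the container:
sudo docker run \
  -v /etc/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml \
  -v /var/log:/var/log:ro \
  docker.elastic.co/beats/filebeat:7.4.2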

Does docker log-level impact logging driver or only logs of docker daemon?

I have my logging driver set to journald. Does the log-level config in the daemon.json file impact logs when using a logging driver, or only the container logs shown by docker logs <container_name>?
For example, docker and journald have documentation showing how to set log level/priority.
Docker's default setting is info: log-level: info.
With journald I can also use -p to set the log priority to info: journalctl -p info.
If my docker logging driver is journald with log priority set to info, do I even need to worry about setting log-level to info in the daemon.json file?
I think maybe you have confused the following concepts: logs of the docker daemon, logs of the container(s), and the logs printed by the journalctl command.
The log-level setting in the daemon.json file impacts only the logs of the docker daemon itself.
The logs of a container are impacted only by your application's configuration inside that container.
The journalctl -p command ONLY impacts the logs shown on your screen, which means -p only does filtering. No matter what level you indicate, err or info, the logs are already there.
Hope this is helpful.
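For illustration, a minimal daemon.json that combines the two settings discussed here (log-driver and log-level are standard daemon.json keys; the values are just examples):
{
  "log-driver": "journald",
  "log-level": "info"
}
The log-level key governs how verbose the docker daemon's own messages are, while journalctl -p only filters which of the already-stored journal entries are displayed.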

using syslog to ship the docker container logs to logstash

Can logs in a docker container, say logs located in /var/log/syslog, get shipped to logstash without using any additional components such as lumberjack or logspout?
Just wondering because I set up an environment and tried to make it work with syslog (so syslog ships the logs from the docker container to logstash), but so far it's not working, and I'm wondering if there's something wrong with my logic.
There's no way for messages in /var/log/syslog to magically route to logstash without something configured to forward messages. Something must send the logs to logstash. You have a few options:
Configure your app to send log messages to stdout rather than to /var/log/syslog, and run logspout to collect stdout from all the running containers and send messages to your logstash endpoint.
Run a syslog daemon such as rsyslog inside your container and configure it to send messages to your logstash endpoint.
Bind mount /dev/log from the host to your container by passing -v /dev/log:/dev/log to docker run when starting your container. On the host, configure your syslog daemon to send messages to logstash.
You could use the docker syslog driver to send docker logs straight from docker containers to logstash. You just have to add a couple of parameters when you run your container:
https://docs.docker.com/engine/admin/logging/overview/#supported-logging-drivers
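A minimal sketch of what those parameters can look like, assuming logstash has a syslog input listening on tcp://logstash-host:5000 (the address and image name are placeholders):
docker run \
  --log-driver syslog \
  --log-opt syslog-address=tcp://logstash-host:5000 \
  your-image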
