Docker fluentd | Unable to forward request from host to docker daemon - docker

I'm trying to run the fluentd Docker example following https://docs.fluentd.org/v0.12/articles/install-by-docker
I'm unable to make a request to the container and hit the error below.
$curl -X POST -d 'json={"json":"message"}' http://localhost:9880/sample.test
curl: (56) Recv failure: Connection reset by peer
I tried to telnet:
$ telnet localhost 9880
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Connection closed by foreign host.
It looks like the Docker container is running successfully:
$ docker run -p 9880:9880 -it --rm --privileged=true -v /tmp/fluentd:/fluentd/etc -e FLUENTD_CONF=fluentd.conf fluent/fluentd
2018-04-09 12:41:18 +0000 [info]: parsing config file is succeeded path="/fluentd/etc/fluentd.conf"
2018-04-09 12:41:18 +0000 [info]: using configuration file: <ROOT>
<source>
  @type http
  port 9880
  bind "0.0.0.0"
</source>
<match **>
  @type stdout
</match>
</ROOT>
2018-04-09 12:41:18 +0000 [info]: starting fluentd-1.1.3 pid=7 ruby="2.4.4"
2018-04-09 12:41:18 +0000 [info]: spawn command to main: cmdline=["/usr/bin/ruby", "-Eascii-8bit:ascii-8bit", "/usr/bin/fluentd", "-c", "/fluentd/etc/fluentd.conf", "-p", "/fluentd/plugins", "--under-supervisor"]
2018-04-09 12:41:19 +0000 [info]: gem 'fluentd' version '1.1.3'
2018-04-09 12:41:19 +0000 [info]: adding match pattern="**" type="stdout"
2018-04-09 12:41:19 +0000 [info]: adding source type="http"
2018-04-09 12:41:19 +0000 [info]: #0 starting fluentd worker pid=17 ppid=7 worker=0
2018-04-09 12:41:19 +0000 [info]: #0 fluentd worker is now running worker=0
2018-04-09 12:41:19.135995928 +0000 fluent.info: {"worker":0,"message":"fluentd worker is now running worker=0"}

I followed all the steps in the example myself; no errors, everything worked fine.
Check whether port 9880 is open (netstat -neta | grep 9880).
You may have a firewall (Windows) or some iptables rules in the way.
It looks like a firewall problem; please check it.
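For reference, a couple of quick checks before blaming the firewall (a hedged sketch; <container-id> is a placeholder):
sudo netstat -neta | grep 9880     # is the port bound on the host?
docker port <container-id>         # should show 9880/tcp -> 0.0.0.0:9880
curl -v -X POST -d 'json={"json":"message"}' http://127.0.0.1:9880/sample.test
If the port is bound and the mapping is shown, the connection reset is most likely coming from a firewall or iptables rule sitting between curl and the container.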

Related

How to configure the fluentd daemonset for syslog and forward everything?

I'm trying to use this one: https://github.com/fluent/fluentd-kubernetes-daemonset/blob/master/fluentd-daemonset-syslog.yaml
I configured the syslog host, IP, and protocol and applied it, but only not-so-useful logs appear at my remote rsyslog server (I mean they were not from any app or system pod logs, just this):
Apr 15 15:42:05 fluentd-xzdgs fluentd: _BOOT_ID:cfd4dc3fdedb496c808df2fd8adeb9ac#011_MACHINE_ID:eXXXXXXXXXXbc28e1#011_HOSTNAME:ip-11.22.33.444.ap-southeast-1.compute.internal#011PRIORITY:6#011_UID:0#011_GID:0#011_CAP_EFFECTIVE:3fffffffff#011_SYSTEMD_SLICE:system.slice#011_TRANSPORT:stdout#011SYSLOG_FACILITY:3#011_STREAM_ID:03985e96bd7c458cbefaf81c6f866297#011SYSLOG_IDENTIFIER:kubelet#011_PID:3424#011_COMM:kubelet#011_EXE:/usr/bin/kubelet#011_CMDLINE:/usr/bin/kubelet --cloud-provider aws --config /etc/kubernetes/kubelet/kubelet-config.json --kubeconfig /var/lib/kubelet/kubeconfig --container-runtime remote --container-runtime-endpoint unix:///run/containerd/containerd.sock --network-plugin cni --node-ip=111.222.333.444 --pod-infra-container-image=602401143452.dkr.ecr.ap-southeast-1.amazonaws.com/eks/pause:3.1-eksbuild.1 --v=2 --node-labels=eks.amazonaws.com/nodegroup-image=ami-04e2f0450bc3d0837,eks.amazonaws.com/capacityType=ON_DEMAND,eks.amazonaws.com/sourceLaunchTemplateVersion=1,eks.amazonaws.com/nodegroup=XXXXX-20220401043
I did not configure anything else.
My k8s version is 1.21 on EKS.
I checked the fluentd DaemonSet pod: it started with "pattern not matched" warnings and, a few seconds later, escalated into a complete feedback loop of escaped quotes ("\\\").
The fluentd pod logs:
2022-04-15 15:48:43 +0000 [warn]: #0 [in_tail_container_logs] pattern not matched: "2022-04-15T15:48:42.671721363Z stdout F 2022-04-15 15:48:42 +0000 [warn]: #0 [in_tail_container_logs] pattern not matched: \"2022-04-15T15:48:41.634512612Z stdout F 2022-04-15 15:48:41 +0000 [warn]: #0 [in_tail_container_logs] pattern not matched: \\\"2022-04-15T15:48:40.596571231Z stdout F 2022-04-15 15:48:40 +0000 [warn]: #0 [in_tail_container_logs] pattern not matched: \\\\\\\"2022-04-15T15:48:39.617967459Z stdout F 2022-04-15 15:48:39 +0000 [warn]: #0 [in_tail_container_logs] pattern not matched: \\\\\\\\\\\\\\\"2022-04-15T15:48:38.628577821Z stdout F 2022-04-15 15:48:38 +0000 [warn]: #0 [in_tail_container_logs] pattern not matched: \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"2022-04-15T15:48:37.612301989Z stdout F 2022-04-15 15:48:37 +0000 [warn]: #0 [in_tail_container_logs] pattern not matched: \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"2022-04-15T15:48:36.569418367Z stdout F 2022-04-15 15:48:36 +0000 [warn]: #0 [in_tail_container_logs] pattern not matched: \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"2022-04-15T15:48:35.562340916Z stdout F 2022-04-15 15:48:35 +0000 [info]: #0 [in_tail_container_logs] following tail of /var/log/containers/podname-kr8mg_namespacename-ecc1e41b47da5ae6b34fd372475baf34e129540af59a3455f29541d6093eedb7.log\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"\\\\\\\\\\\\\\\"\\\\\\\"\\\"\""
How do I forward everything in my application logs? My k8s apps' logs are not JSON; they are just multiline or single-line logs with no structure or format.
Many thanks!
I figured it out: the default fluentd configuration expects the Docker (dockerd) log format, but newer k8s runs on containerd, so I had to change the parser type to cri. Problem solved!
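For reference, a minimal sketch of that change, assuming the image ships the CRI parser (fluent-plugin-parser-cri) and a container tail source similar to the daemonset's default:
<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  <parse>
    @type cri        # instead of the default json (dockerd) parser
  </parse>
</source>
In the fluentd-kubernetes-daemonset images this can usually be switched without editing the config file, by setting the FLUENT_CONTAINER_TAIL_PARSER_TYPE environment variable on the DaemonSet.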

Parsing Linux audit logs with fluentd

I'm attempting to send Linux audit logs to an Elastic endpoint. I've installed fluentd via the RPM package. For context, I am using CentOS Linux release 8.3.2011. My Linux audit logs are under /var/log/audit/audit.log. I've checked and double-checked that the audit logs exist.
The logs never indicate that I'm tailing them. Here's my configuration:
<source>
  @type tail
  tag linux_logs.raw
  path /var/log/audit/audit.log
  read_from_head true
  pos_file /etc/td-agent/test.pos
  <parse>
    @type regexp
    expression /(?<message>.+)/
    time_format %Y-%m-%d %H:%M:%S
    utc true
  </parse>
</source>
####
## Filter descriptions:
##
<filter **>
  @type record_transformer
  <record>
    hostname "${hostname}"
    timestamp "${time}"
  </record>
</filter>
####
## Output descriptions:
##
<match **>
  @type http
  endpoint https://myendpoint/
  open_timeout 2
  headers {"Authorization":"Bearer <token> <token2>"}
  <format>
    @type json
  </format>
  <buffer>
    @type memory
    flush_interval 10s
    compress gzip
  </buffer>
</match>
The logs never indicate that the audit.log file is being tailed:
2021-06-14 14:42:59 -0400 [info]: starting fluentd-1.12.3 pid=10725 ruby="2.7.3"
2021-06-14 14:42:59 -0400 [info]: spawn command to main: cmdline=["/opt/td-agent/bin/ruby", "-Eascii-8bit:ascii-8bit", "/opt/td-agent/bin/fluentd", "--log", "/var/log/td-agent/td-agent.log", "--daemon", "/var/run/td-agent/td-agent.pid", "--under-supervisor"]
2021-06-14 14:43:00 -0400 [info]: adding filter pattern="**" type="record_transformer"
2021-06-14 14:43:00 -0400 [info]: adding match pattern="**" type="http"
2021-06-14 14:43:00 -0400 [warn]: #0 Status code 503 is going to be removed from default `retryable_response_codes` from fluentd v2. Please add it by yourself if you wish
2021-06-14 14:43:00 -0400 [info]: adding source type="tail"
2021-06-14 14:43:00 -0400 [warn]: #0 define <match fluent.**> to capture fluentd logs in top level is deprecated. Use <label #FLUENT_LOG> instead
2021-06-14 14:43:00 -0400 [info]: #0 starting fluentd worker pid=10734 ppid=10731 worker=0
2021-06-14 14:43:00 -0400 [info]: #0 fluentd worker is now running worker=0
Is this a permissions issue? Tailing works if I point it at a tmp file, so it seems to be a permissions issue. Any ideas?
Yes, it is a permission issue. Fluentd was installed via RPM, so the daemon runs as the "td-agent" user and "td-agent" group.
You need to check the /var/log/audit/audit.log file permissions, and in case you have:
-rw-------
I suggest running Fluentd as root. To do this, change the /lib/systemd/system/td-agent.service file from:
[Service]
User=td-agent
Group=td-agent
to
[Service]
User=root
Group=root
Finally, do a daemon-reload and restart the Fluentd service.
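A hedged sketch of those last steps (assumes systemd and the standard td-agent unit name; using a drop-in override via systemctl edit avoids modifying the packaged unit file directly):
sudo systemctl edit td-agent                   # add the [Service] User=root / Group=root override here
sudo systemctl daemon-reload
sudo systemctl restart td-agent
sudo tail -f /var/log/td-agent/td-agent.log    # in_tail should now report "following tail of /var/log/audit/audit.log"
Running the agent as root is the blunt fix; a narrower alternative is granting the td-agent user read access to the audit log (for example via auditd's log_group setting or an ACL), if your security policy prefers that.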

AWS ECS container exiting without specific reason

The ECS service is exiting after running for a couple of seconds. If we create a container from the image manually, it runs fine. Command: sudo docker run -i -p 9100:9100 -p 9110:9110 -p 9120:9120 -p 9130:9130 847782638323.dkr.ecr.us-east-1.amazonaws.com/bytemark/cap
ECS logs are as follows:
2017-01-23T19:37:00Z [INFO] Created docker container for task bytemark-cap:2 arn:aws:ecs:us-east-1:847782638323:task/5d0c49a2-9591-469e-b05b-ebbaaefac26e, Status: (CREATED->RUNNING) Containers: [bytemark-cap-container (CREATED->RUNNING),]: bytemark-cap-container(847782638323.dkr.ecr.us-east-1.amazonaws.com/bytemark/cap:latest) (CREATED->RUNNING) -> c8ee05c5cd688209939d96b54cb0c74c4122686036362d7bfa19f85bdc2dd56e
2017-01-23T19:37:00Z [INFO] Starting container module="TaskEngine" task="bytemark-cap:2 arn:aws:ecs:us-east-1:847782638323:task/5d0c49a2-9591-469e-b05b-ebbaaefac26e, Status: (CREATED->RUNNING) Containers: [bytemark-cap-container (CREATED->RUNNING),]" container="bytemark-cap-container(847782638323.dkr.ecr.us-east-1.amazonaws.com/bytemark/cap:latest) (CREATED->RUNNING)"
2017-01-23T19:37:01Z [INFO] Task change event module="TaskEngine" event="{TaskArn:arn:aws:ecs:us-east-1:847782638323:task/5d0c49a2-9591-469e-b05b-ebbaaefac26e Status:RUNNING Reason: SentStatus:NONE}"
2017-01-23T19:37:01Z [INFO] Adding event module="eventhandler" change="ContainerChange: arn:aws:ecs:us-east-1:847782638323:task/5d0c49a2-9591-469e-b05b-ebbaaefac26e bytemark-cap-container -> RUNNING, Ports [{9100 9100 0.0.0.0 0} {9110 9110 0.0.0.0 0} {9120 9120 0.0.0.0 0} {9130 9130 0.0.0.0 0}], Known Sent: NONE"
2017-01-23T19:37:01Z [INFO] Adding event module="eventhandler" change="TaskChange: arn:aws:ecs:us-east-1:847782638323:task/5d0c49a2-9591-469e-b05b-ebbaaefac26e -> RUNNING, Known Sent: NONE"
2017-01-23T19:37:01Z [INFO] Sending container change module="eventhandler" event="ContainerChange: arn:aws:ecs:us-east-1:847782638323:task/5d0c49a2-9591-469e-b05b-ebbaaefac26e bytemark-cap-container -> RUNNING, Ports [{9100 9100 0.0.0.0 0} {9110 9110 0.0.0.0 0} {9120 9120 0.0.0.0 0} {9130 9130 0.0.0.0 0}], Known Sent: NONE" change="ContainerChange: arn:aws:ecs:us-east-1:847782638323:task/5d0c49a2-9591-469e-b05b-ebbaaefac26e bytemark-cap-container -> RUNNING, Ports [{9100 9100 0.0.0.0 0} {9110 9110 0.0.0.0 0} {9120 9120 0.0.0.0 0} {9130 9130 0.0.0.0 0}], Known Sent: NONE"
2017-01-23T19:37:01Z [INFO] Redundant container state change for task bytemark-cap:2 arn:aws:ecs:us-east-1:847782638323:task/5d0c49a2-9591-469e-b05b-ebbaaefac26e, Status: (RUNNING->RUNNING) Containers: [bytemark-cap-container (RUNNING->RUNNING),]: bytemark-cap-container(847782638323.dkr.ecr.us-east-1.amazonaws.com/bytemark/cap:latest) (RUNNING->RUNNING) to RUNNING, but already RUNNING
2017-01-23T19:37:01Z [INFO] Sending task change module="eventhandler" event="TaskChange: arn:aws:ecs:us-east-1:847782638323:task/5d0c49a2-9591-469e-b05b-ebbaaefac26e -> RUNNING, Known Sent: NONE" change="TaskChange: arn:aws:ecs:us-east-1:847782638323:task/5d0c49a2-9591-469e-b05b-ebbaaefac26e -> RUNNING, Known Sent: NONE"
2017-01-23T19:37:01Z [INFO] Task change event module="TaskEngine" event="{TaskArn:arn:aws:ecs:us-east-1:847782638323:task/5d0c49a2-9591-469e-b05b-ebbaaefac26e Status:STOPPED Reason: SentStatus:RUNNING}"
2017-01-23T19:37:01Z [INFO] Error retrieving stats for container c8ee05c5cd688209939d96b54cb0c74c4122686036362d7bfa19f85bdc2dd56e: context canceled
2017-01-23T19:37:01Z [INFO] Adding event module="eventhandler" change="ContainerChange: arn:aws:ecs:us-east-1:847782638323:task/5d0c49a2-9591-469e-b05b-ebbaaefac26e bytemark-cap-container -> STOPPED, Exit 0, , Known Sent: RUNNING"
2017-01-23T19:37:01Z [INFO] Adding event module="eventhandler" change="TaskChange: arn:aws:ecs:us-east-1:847782638323:task/5d0c49a2-9591-469e-b05b-ebbaaefac26e -> STOPPED, Known Sent: RUNNING"
2017-01-23T19:37:01Z [INFO] Container c8ee05c5cd688209939d96b54cb0c74c4122686036362d7bfa19f85bdc2dd56e is terminal, stopping stats collection
Is there a way to debug the actual cause of the ECS service exiting?
Thank you.
If you can SSH into the Docker instances (if you created them through ECS and specified a key, or are using your own instances), you can get the logs from Docker.
I had an issue where my only Docker container was dying after a few seconds. You can list the containers with
docker ps -a
Find the container you want to see the logs for and use
docker logs <container-id>
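In addition to the logs, docker inspect records the exit code and whether the container was OOM-killed, which often explains why ECS marked the task STOPPED (a hedged extra check; <container-id> is a placeholder):
docker inspect --format '{{.State.ExitCode}} {{.State.OOMKilled}} {{.State.Error}}' <container-id>
In this case the ECS agent reported "Exit 0", which usually means the main process simply finished; for a long-running service the container's command needs to stay in the foreground.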

Using Docker with Sensu

I am a novice with Docker and wanted to use Sensu for monitoring containers. I have set up a Sensu server and a Sensu client (where my Docker containers are running) using the material below:
http://devopscube.com/monitor-docker-containers-guide/
I get the Sensu client information in the Uchiwa dashboard when running the command below:
docker run -d --name sensu-client --privileged \
-v $PWD/load-docker-metrics.sh:/etc/sensu/plugins/load-docker-metrics.sh \
-v /var/run/docker.sock:/var/run/docker.sock \
usman/sensu-client SENSU_SERVER_IP RABIT_MQ_USER RABIT_MQ_PASSWORD CLIENT_NAME CLIENT_IP
However, when I try to fire up a new container from the same host machine, I do not get the client information for it in the Uchiwa dashboard.
It would be great if anyone who has used Sensu with Docker to monitor containers could offer guidance.
Thanks for your time.
Please post the logs of the sensu-client.
[ec2-user@ip-172-31-0-89 sensu-client]$ sudo su
[root@ip-172-31-0-89 sensu-client]# docker logs sensu-client
/usr/lib/python2.6/site-packages/supervisor-3.1.3-py2.6.egg/supervisor/options.py:296: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (
including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
'Supervisord is running as root and it is searching '
2017-01-09 04:11:47,210 CRIT Supervisor running as root (no user in config file)
2017-01-09 04:11:47,212 INFO supervisord started with pid 12
2017-01-09 04:11:48,214 INFO spawned: 'sensu-client' with pid 15
2017-01-09 04:11:49,524 INFO success: sensu-client entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-01-09 04:11:49,530 INFO exited: sensu-client (exit status 0; expected)
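The log shows the sensu-client process exiting cleanly right after start ("exited: sensu-client (exit status 0)"), which usually points to a configuration or connectivity problem rather than a crash. A hedged debugging sketch (assumes the standard RabbitMQ port 5672 on the Sensu server; SENSU_SERVER_IP is the same placeholder used in the run command above):
nc -zv SENSU_SERVER_IP 5672       # can the client host reach RabbitMQ on the Sensu server?
docker logs -f sensu-client       # watch for connection or authentication errors after a restart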

How to send a param from Docker to fluentd to dynamically decide on the file output

I want my hello-world container to log to fluentd, and I'd like fluentd to dynamically route the output to a folder.
The idea is to start the container like this:
docker run --log-driver=fluentd --log-opt fluentdLogsDirName=docker.{{.NAME}} hello-world
and the fluentd config file is like this:
<source>
  @type forward
</source>
<match docker.**>
  @type file
  path /var/log/fluentd/#{fluentdLogsDirName}
  time_slice_format %Y%m%d
  time_slice_wait 10m
  time_format %Y%m%dT%H%M%S%z
  compress gzip
  utc
</match>
Thing is, fluentd throws errors on startup (BTW, fluentd is also running in a Docker container):
2016-03-28 14:48:56 +0000 [info]: reading config file path="/fluentd/etc/test.conf"
2016-03-28 14:48:56 +0000 [info]: starting fluentd-0.12.21
2016-03-28 14:48:56 +0000 [info]: gem 'fluentd' version '0.12.21'
2016-03-28 14:48:56 +0000 [info]: adding match pattern="docker.**" type="stdout"
2016-03-28 14:48:56 +0000 [info]: adding match pattern="docker.**" type="file"
2016-03-28 14:48:56 +0000 [error]: config error file="/fluentd/etc/test.conf" error="out_file: `/var/log/fluentd/\#{fluentdLogsDirName}.20160328_0.log` is not writable"
2016-03-28 14:48:56 +0000 [info]: process finished code=256
2016-03-28 14:48:56 +0000 [warn]: process died within 1 second. exit.
I started my fluentd container with:
docker run -it -p 24224:24224 -v /blabla:/fluentd/etc -e FLUENTD_CONF=test.conf fluent/fluentd:latest
http://docs.fluentd.org/articles/out_file
I don't think fluentdLogsDirName is currently an option for the fluentd log driver in Docker; https://docs.docker.com/engine/admin/logging/fluentd/
Also, Go templates ({{ .Name }}) are only supported for tags (https://docs.docker.com/engine/admin/logging/log_tags/), not for other logging-driver options.
So at this moment, I don't think this is possible.
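A possible workaround (a hedged sketch, not the option the question asked for; it assumes a fluentd v1.x image where out_file supports path placeholders, rather than the 0.12 image shown above): pass the container name through the supported tag option, then key the output path off the tag.
docker run --log-driver=fluentd --log-opt tag="docker.{{.Name}}" hello-world
<match docker.**>
  @type file
  path /var/log/fluentd/${tag}      # expands per tag, e.g. /var/log/fluentd/docker.hello-world
  compress gzip
  <buffer tag,time>
    timekey 1d
    timekey_wait 10m
    timekey_use_utc true
  </buffer>
</match>
On fluentd 0.12 the same per-tag file output was typically achieved with the fluent-plugin-forest plugin instead of path placeholders.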
