I'm trying to use this one: https://github.com/fluent/fluentd-kubernetes-daemonset/blob/master/fluentd-daemonset-syslog.yaml
I configured the syslog host, IP, and protocol, applied it, and only not-so-useful logs appear at my remote rsyslog server (nothing from any app or system pod logs, just this):
Apr 15 15:42:05 fluentd-xzdgs fluentd: _BOOT_ID:cfd4dc3fdedb496c808df2fd8adeb9ac#011_MACHINE_ID:eXXXXXXXXXXbc28e1#011_HOSTNAME:ip-11.22.33.444.ap-southeast-1.compute.internal#011PRIORITY:6#011_UID:0#011_GID:0#011_CAP_EFFECTIVE:3fffffffff#011_SYSTEMD_SLICE:system.slice#011_TRANSPORT:stdout#011SYSLOG_FACILITY:3#011_STREAM_ID:03985e96bd7c458cbefaf81c6f866297#011SYSLOG_IDENTIFIER:kubelet#011_PID:3424#011_COMM:kubelet#011_EXE:/usr/bin/kubelet#011_CMDLINE:/usr/bin/kubelet --cloud-provider aws --config /etc/kubernetes/kubelet/kubelet-config.json --kubeconfig /var/lib/kubelet/kubeconfig --container-runtime remote --container-runtime-endpoint unix:///run/containerd/containerd.sock --network-plugin cni --node-ip=111.222.333.444 --pod-infra-container-image=602401143452.dkr.ecr.ap-southeast-1.amazonaws.com/eks/pause:3.1-eksbuild.1 --v=2 --node-labels=eks.amazonaws.com/nodegroup-image=ami-04e2f0450bc3d0837,eks.amazonaws.com/capacityType=ON_DEMAND,eks.amazonaws.com/sourceLaunchTemplateVersion=1,eks.amazonaws.com/nodegroup=XXXXX-20220401043
I did not configure anything else.
My k8s version is EKS 1.21.
I checked the fluentd DaemonSet pod: it started with "pattern not matched" warnings and, a few seconds later, wound up in a complete loop of "\\\" escapes.
The fluentd pod logs:
2022-04-15 15:48:43 +0000 [warn]: #0 [in_tail_container_logs] pattern not matched: "2022-04-15T15:48:42.671721363Z stdout F 2022-04-15 15:48:42 +0000 [warn]: #0 [in_tail_container_logs] pattern not matched: \"2022-04-15T15:48:41.634512612Z stdout F 2022-04-15 15:48:41 +0000 [warn]: #0 [in_tail_container_logs] pattern not matched: \\\"2022-04-15T15:48:40.596571231Z stdout F 2022-04-15 15:48:40 +0000 [warn]: #0 [in_tail_container_logs] pattern not matched: \\\\\\\"2022-04-15T15:48:39.617967459Z stdout F 2022-04-15 15:48:39 +0000 [warn]: #0 [in_tail_container_logs] pattern not matched: \\\\\\\\\\\\\\\"2022-04-15T15:48:38.628577821Z stdout F 2022-04-15 15:48:38 +0000 [warn]: #0 [in_tail_container_logs] pattern not matched: \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"2022-04-15T15:48:37.612301989Z stdout F 2022-04-15 15:48:37 +0000 [warn]: #0 [in_tail_container_logs] pattern not matched: \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"2022-04-15T15:48:36.569418367Z stdout F 2022-04-15 15:48:36 +0000 [warn]: #0 [in_tail_container_logs] pattern not matched: \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"2022-04-15T15:48:35.562340916Z stdout F 2022-04-15 15:48:35 +0000 [info]: #0 [in_tail_container_logs] following tail of /var/log/containers/podname-kr8mg_namespacename-ecc1e41b47da5ae6b34fd372475baf34e129540af59a3455f29541d6093eedb7.log\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"\\\\\\\\\\\\\\\"\\\\\\\"\\\"\""
How do I forward everything in my application logs? My k8s apps' logs are not JSON; they are just single-line or multiline logs with no structure or format.
Many thanks!
I have figured it out: the default fluentd configuration assumes Docker's log format, but newer Kubernetes nodes run containerd, so I had to change the parser type to cri. Problem solved!
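For anyone hitting the same wall: the daemonset's in_tail source parses container logs with Docker's JSON format by default, while containerd/CRI-O nodes write the CRI format (visible in the unmatched lines above: timestamp, stream, a P/F tag, then the message). Recent images of that repo appear to let you switch parsers by setting the FLUENT_CONTAINER_TAIL_PARSER_TYPE environment variable to cri; if yours doesn't, you can edit the <parse> block of the in_tail_container_logs source yourself. A minimal sketch (the cri parser comes from the fluent-plugin-parser-cri gem, which I believe ships in recent daemonset images):

<source>
  @type tail
  @id in_tail_container_logs
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  read_from_head true
  <parse>
    @type cri    # the default config uses the Docker json parser here
  </parse>
</source>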
I'm attempting to send Linux audit logs to an Elastic endpoint. I've installed td-agent via the RPM package. For context, I am using CentOS Linux release 8.3.2011. My Linux audit logs are under /var/log/audit/audit.log. I've checked and double-checked that the audit logs exist.
The td-agent logs never indicate that the file is being tailed. Here's my configuration:
<source>
  @type tail
  tag linux_logs.raw
  path /var/log/audit/audit.log
  read_from_head true
  pos_file /etc/td-agent/test.pos
  <parse>
    @type regexp
    expression /(?<message>.+)/
    time_format %Y-%m-%d %H:%M:%S
    utc true
  </parse>
</source>
####
## Filter descriptions:
##
<filter **>
  @type record_transformer
  <record>
    hostname "${hostname}"
    timestamp "${time}"
  </record>
</filter>
####
## Output descriptions:
##
<match **>
  @type http
  endpoint https://myendpoint/
  open_timeout 2
  headers {"Authorization":"Bearer <token> <token2>"}
  <format>
    @type json
  </format>
  <buffer>
    @type memory
    flush_interval 10s
    compress gzip
  </buffer>
</match>
The logs never show the audit.log file being tailed:
2021-06-14 14:42:59 -0400 [info]: starting fluentd-1.12.3 pid=10725 ruby="2.7.3"
2021-06-14 14:42:59 -0400 [info]: spawn command to main: cmdline=["/opt/td-agent/bin/ruby", "-Eascii-8bit:ascii-8bit", "/opt/td-agent/bin/fluentd", "--log", "/var/log/td-agent/td-agent.log", "--daemon", "/var/run/td-agent/td-agent.pid", "--under-supervisor"]
2021-06-14 14:43:00 -0400 [info]: adding filter pattern="**" type="record_transformer"
2021-06-14 14:43:00 -0400 [info]: adding match pattern="**" type="http"
2021-06-14 14:43:00 -0400 [warn]: #0 Status code 503 is going to be removed from default `retryable_response_codes` from fluentd v2. Please add it by yourself if you wish
2021-06-14 14:43:00 -0400 [info]: adding source type="tail"
2021-06-14 14:43:00 -0400 [warn]: #0 define <match fluent.**> to capture fluentd logs in top level is deprecated. Use <label #FLUENT_LOG> instead
2021-06-14 14:43:00 -0400 [info]: #0 starting fluentd worker pid=10734 ppid=10731 worker=0
2021-06-14 14:43:00 -0400 [info]: #0 fluentd worker is now running worker=0
Is this a permissions issue? Tailing works if I use a tmp file, so it seems to be a permissions issue. Any ideas?
Yes, it is a permission issue. Fluentd was installed from the RPM, so the daemon runs as the "td-agent" user and "td-agent" group.
You need to check the "/var/log/audit/audit.log" file permissions; in case you have:
-rw-------
I suggest you run Fluentd as root. To do this, you need to change the "/lib/systemd/system/td-agent.service" file from:
[Service]
User=td-agent
Group=td-agent
to
[Service]
User=root
Group=root
Finally, do a daemon-reload and restart the Fluentd service.
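On a systemd host, that means:

$ sudo systemctl daemon-reload
$ sudo systemctl restart td-agent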
I'm running Fluentd (td-agent) 3.5, which seems to give up after failing to flush the buffer. I can see there is a retry_forever parameter, currently set to false, but I'd rather find out what is causing the failure and set the retry threshold higher.
Config:
<source>
  @type tail
  path "XXX"
  tag "XXX"
  pos_file "XXX"
  <parse>
    @type "json"
  </parse>
</source>
<match *.**>
  @type forward
  compress gzip
  buffer_type file
  buffer_path d:\dynamo\td-agent\buffer
  flush_interval 10m
  <server>
    host "XXX"
    port XXX
  </server>
  <buffer tag>
    @type file
    path XXX
    flush_interval 10m
  </buffer>
</match>
Logs:
2019-09-30 13:53:03 +0100 [warn]: #0 failed to flush the buffer. retry_time=0 next_retry_seconds=2019-09-30 13:53:04 +0100 chunk="593c4937d535515d77cffca381c87720" error_class=Fluent::Plugin::ForwardOutput::NoNodesAvailable error="no nodes are available"
2019-09-30 13:53:03 +0100 [warn]: #0 d:/Dynamo/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.7.0/lib/fluent/plugin/out_forward/load_balancer.rb:55:in `select_healthy_node'
2019-09-30 13:53:03 +0100 [warn]: #0 d:/Dynamo/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.7.0/lib/fluent/plugin/out_forward.rb:321:in `write'
2019-09-30 13:53:03 +0100 [warn]: #0 d:/Dynamo/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.7.0/lib/fluent/plugin/output.rb:1122:in `try_flush'
2019-09-30 13:53:03 +0100 [warn]: #0 d:/Dynamo/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.7.0/lib/fluent/plugin/output.rb:1428:in `flush_thread_run'
2019-09-30 13:53:03 +0100 [warn]: #0 d:/Dynamo/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.7.0/lib/fluent/plugin/output.rb:458:in `block (2 levels) in start'
2019-09-30 13:53:03 +0100 [warn]: #0 d:/Dynamo/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.7.0/lib/fluent/plugin_helper/thread.rb:78:in `block in thread_create'
2019-09-30 13:53:04 +0100 [warn]: #0 failed to flush the buffer. retry_time=1 next_retry_seconds=2019-09-30 13:53:05 +0100 chunk="593c4937d535515d77cffca381c87720" error_class=Fluent::Plugin::ForwardOutput::NoNodesAvailable error="no nodes are available"
2019-09-30 13:53:04 +0100 [warn]: #0 suppressed same stacktrace
2019-09-30 13:53:05 +0100 [warn]: #0 failed to flush the buffer. retry_time=2 next_retry_seconds=2019-09-30 13:53:07 +0100 chunk="593c4937d535515d77cffca381c87720" error_class=Fluent::Plugin::ForwardOutput::NoNodesAvailable error="no nodes are available"
2019-09-30 13:53:19 +0100 [warn]: #0 suppressed same stacktrace
2019-09-30 13:53:35 +0100 [warn]: #0 failed to flush the buffer. retry_time=6 next_retry_seconds=2019-09-30 13:54:06 +0100 chunk="593c4937d535515d77cffca381c87720" error_class=Fluent::Plugin::ForwardOutput::NoNodesAvailable error="no nodes are available"
2019-09-30 13:53:35 +0100 [warn]: #0 suppressed same stacktrace
2019-09-30 13:54:06 +0100 [warn]: #0 failed to flush the buffer. retry_time=7 next_retry_seconds=2019-09-30 13:55:11 +0100 chunk="593c4937d535515d77cffca381c87720" error_class=Fluent::Plugin::ForwardOutput::NoNodesAvailable error="no nodes are available"
2019-09-30 13:54:06 +0100 [warn]: #0 suppressed same stacktrace
2019-09-30 13:57:11 +0100 [warn]: #0 suppressed same stacktrace
2019-09-30 14:00:00 +0100 [info]: #0 detected rotation of d:/Dynamo/logs/dynamo-service-agent-2019-09-30-13.log; waiting 5 seconds
2019-09-30 14:00:24 +0100 [info]: #0 following tail of d:/Dynamo/logs/dynamo-service-agent-2019-09-30-14.log
2019-09-30 14:01:50 +0100 [warn]: #0 failed to flush the buffer. retry_time=10 next_retry_seconds=2019-09-30 14:10:29 +0100 chunk="593c4937d535515d77cffca381c87720" error_class=Fluent::Plugin::ForwardOutput::NoNodesAvailable error="no nodes are available"
2019-09-30 14:01:50 +0100 [warn]: #0 suppressed same stacktrace
2019-09-30 14:10:29 +0100 [warn]: #0 failed to flush the buffer. retry_time=11 next_retry_seconds=2019-09-30 14:29:15 +0100 chunk="593c4937d535515d77cffca381c87720" error_class=Fluent::Plugin::ForwardOutput::NoNodesAvailable error="no nodes are available"
2019-09-30 14:10:29 +0100 [warn]: #0 suppressed same stacktrace
2019-09-30 14:12:22 +0100 [info]: Received graceful stop
2019-09-30 14:12:22 +0100 [info]: Worker 0 finished with status 0
2019-09-30 14:13:24 +0100 [info]: parsing config file is succeeded path="d:\\dynamo\\td-agent\\etc\\td-agent\\td-agent.conf"
2019-09-30 14:13:24 +0100 [info]: adding forwarding server 'XXX' host="XXX" port=XXX weight=60 plugin_id="object:17760cc"
After the above, Fluentd stops forwarding logs... what have I missed?
Regards
If you are using Kubernetes, check whether your pod's memory and CPU usage are reaching their limits.
The pod gets SIGKILL when it exceeds its memory limit.
Try increasing the CPU or memory limits to suit your system's needs.
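On the retry-threshold part of the question: instead of retry_forever, you can raise the give-up point with the retry parameters of the <buffer> section. A sketch with placeholder values (tune them to your outage tolerance):

<buffer tag>
  @type file
  path XXX
  flush_interval 10m
  retry_wait 30s            # delay before the first retry
  retry_max_interval 5m     # cap on the exponential backoff
  retry_max_times 30        # give up after 30 attempts...
  retry_timeout 6h          # ...or after 6 hours, whichever comes first
</buffer>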
I'm following the steps to enable TLS/SSL encryption using td-agent, and I cannot get the test to pass (https://docs.fluentd.org/v1.0/articles/in_forward#how-to-enable-tls/ssl-encryption):
1) Created the certs:
$ openssl req -new -x509 -sha256 -days 1095 -newkey rsa:2048 -keyout fluentd.key -out fluentd.crt
2) Installed them:
$ sudo mkdir -p /etc/td-agent/certs
$ sudo mv fluentd.key fluentd.crt /etc/td-agent/certs
$ sudo chown td-agent:td-agent -R /etc/td-agent/certs
$ sudo chmod 700 /etc/td-agent/certs/
$ sudo chmod 400 /etc/td-agent/certs/fluentd.key
3) Configured td-agent.conf:
$ sudo cat /etc/td-agent/td-agent.conf
<source>
  @type forward
  <transport>
    cert_path /etc/td-agent/certs/fluentd.crt
    private_key_path /etc/td-agent/certs/fluentd.key
    private_key_passphrase testing
  </transport>
</source>
<match debug.**>
  @type stdout
</match>
4) Restarted the service:
$ sudo systemctl restart td-agent
5) When I try the test:
$ echo -e '\x93\xa9debug.tls\xceZr\xbc1\x81\xa3foo\xa3bar' | openssl s_client -connect localhost:24224
I get this in the /var/log/td-agent/td-agent.log tail:
2018-05-05 12:06:08 -0300 [info]: #0 fluentd worker is now running worker=0
2018-05-05 12:06:33 -0300 [warn]: #0 incoming chunk is broken: host="127.0.0.1" msg=22
2018-05-05 12:06:33 -0300 [warn]: #0 incoming chunk is broken: host="127.0.0.1" msg=3
2018-05-05 12:06:33 -0300 [warn]: #0 incoming chunk is broken: host="127.0.0.1" msg=1
2018-05-05 12:06:33 -0300 [warn]: #0 incoming chunk is broken: host="127.0.0.1" msg=1
2018-05-05 12:06:33 -0300 [warn]: #0 incoming chunk is broken: host="127.0.0.1" msg=44
2018-05-05 12:06:33 -0300 [warn]: #0 incoming chunk is broken: host="127.0.0.1" msg=1
2018-05-05 12:06:33 -0300 [warn]: #0 incoming chunk is broken: host="127.0.0.1" msg=0
2018-05-05 12:06:33 -0300 [warn]: #0 incoming chunk is broken: host="127.0.0.1" msg=1
2018-05-05 12:06:33 -0300 [warn]: #0 incoming chunk is broken: host="127.0.0.1" msg=40
2018-05-05 12:06:33 -0300 [warn]: #0 incoming chunk is broken: host="127.0.0.1" msg=3
2018-05-05 12:06:33 -0300 [warn]: #0 incoming chunk is broken: host="127.0.0.1" msg=3
2018-05-05 12:06:33 -0300 [warn]: #0 incoming chunk is broken: host="127.0.0.1" msg="C\x91\xA4Qz\xB4\xD2\xF1\x85&2\u07F5\u0004\xC2F\x9C\xEDt\x89\u0012\xF2\u0535"
2018-05-05 12:06:33 -0300 [warn]: #0 incoming chunk is broken: host="127.0.0.1" msg=33
2018-05-05 12:06:33 -0300 [warn]: #0 incoming chunk is broken: host="127.0.0.1" msg=13
2018-05-05 12:06:33 -0300 [warn]: #0 incoming chunk is broken: host="127.0.0.1" msg=103
2018-05-05 12:06:33 -0300 [warn]: #0 incoming chunk is broken: host="127.0.0.1" msg=65
2018-05-05 12:06:33 -0300 [warn]: #0 incoming chunk is broken: host="127.0.0.1" msg=103
2018-05-05 12:06:33 -0300 [error]: #0 unexpected error on reading data host="127.0.0.1" port=59102 error_class=MessagePack::MalformedFormatError error="invalid byte"
2018-05-05 12:06:33 -0300 [error]: #0 /opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.0.2/lib/fluent/plugin/in_forward.rb:247:in `feed_each'
2018-05-05 12:06:33 -0300 [error]: #0 /opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.0.2/lib/fluent/plugin/in_forward.rb:247:in `block (2 levels) in read_messages'
2018-05-05 12:06:33 -0300 [error]: #0 /opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.0.2/lib/fluent/plugin/in_forward.rb:256:in `block in read_messages'
2018-05-05 12:06:33 -0300 [error]: #0 /opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.0.2/lib/fluent/plugin_helper/server.rb:588:in `on_read_without_connection'
2018-05-05 12:06:33 -0300 [error]: #0 /opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/cool.io-1.5.3/lib/cool.io/io.rb:123:in `on_readable'
2018-05-05 12:06:33 -0300 [error]: #0 /opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/cool.io-1.5.3/lib/cool.io/io.rb:186:in `on_readable'
2018-05-05 12:06:33 -0300 [error]: #0 /opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/cool.io-1.5.3/lib/cool.io/loop.rb:88:in `run_once'
2018-05-05 12:06:33 -0300 [error]: #0 /opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/cool.io-1.5.3/lib/cool.io/loop.rb:88:in `run'
2018-05-05 12:06:33 -0300 [error]: #0 /opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.0.2/lib/fluent/plugin_helper/event_loop.rb:84:in `block in start'
2018-05-05 12:06:33 -0300 [error]: #0 /opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.0.2/lib/fluent/plugin_helper/thread.rb:78:in `block in thread_create'
To be sure, I've tested the self-signed key/crt pair with:
$ openssl rsa -modulus -noout -in fluentd.key | openssl md5
Enter pass phrase for fluentd.key:
(stdin)= b149fbd30d9192f3c3b5e445f757bbf1
$ openssl x509 -modulus -noout -in fluentd.crt | openssl md5
(stdin)= b149fbd30d9192f3c3b5e445f757bbf1
I'm running td-agent 1.0.2 on Ubuntu Server 16.04.
To be honest, I don't know exactly where to continue.
I was running into the same issue, and after hours of investigation I was able to solve it.
The problem is in the <transport> block: the official documentation at https://docs.fluentd.org/v1.0/articles/in_forward omits the tls argument from the block.
Adding tls to it solved the problem.
In summary, edit your in_forward to the following:
<source>
  @type forward
  <transport tls>
    cert_path ....
    private_key_path ...
    private_key_passphrase YOUR_PASSPHRASE
  </transport>
</source>
Once edited, the echo test command will succeed.
echo -e '\x93\xa9debug.tls\xceZr\xbc1\x81\xa3foo\xa3bar' | openssl s_client -connect localhost:24224
Fluentd log output:
2018-05-14 19:15:55.906208368 +0100 fluent.info: {"worker":0,"message":"fluentd worker is now running worker=0"}
2018-02-01 07:05:21.000000000 +0000 debug.tls: {"foo":"bar"}
I'm trying to run the fluentd docker example following https://docs.fluentd.org/v0.12/articles/install-by-docker
I'm unable to make a request to the container; it fails with the error below.
$curl -X POST -d 'json={"json":"message"}' http://localhost:9880/sample.test
curl: (56) Recv failure: Connection reset by peer
I tried to telnet:
$ telnet localhost 9880
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Connection closed by foreign host.
Looks like the docker container is running successfully:
$ docker run -p 9880:9880 -it --rm --privileged=true -v /tmp/fluentd:/fluentd/etc -e FLUENTD_CONF=fluentd.conf fluent/fluentd
2018-04-09 12:41:18 +0000 [info]: parsing config file is succeeded path="/fluentd/etc/fluentd.conf"
2018-04-09 12:41:18 +0000 [info]: using configuration file: <ROOT>
<source>
  @type http
  port 9880
  bind "0.0.0.0"
</source>
<match **>
  @type stdout
</match>
</ROOT>
2018-04-09 12:41:18 +0000 [info]: starting fluentd-1.1.3 pid=7 ruby="2.4.4"
2018-04-09 12:41:18 +0000 [info]: spawn command to main: cmdline=["/usr/bin/ruby", "-Eascii-8bit:ascii-8bit", "/usr/bin/fluentd", "-c", "/fluentd/etc/fluentd.conf", "-p", "/fluentd/plugins", "--under-supervisor"]
2018-04-09 12:41:19 +0000 [info]: gem 'fluentd' version '1.1.3'
2018-04-09 12:41:19 +0000 [info]: adding match pattern="**" type="stdout"
2018-04-09 12:41:19 +0000 [info]: adding source type="http"
2018-04-09 12:41:19 +0000 [info]: #0 starting fluentd worker pid=17 ppid=7 worker=0
2018-04-09 12:41:19 +0000 [info]: #0 fluentd worker is now running worker=0
2018-04-09 12:41:19.135995928 +0000 fluent.info: {"worker":0,"message":"fluentd worker is now running worker=0"}
I just followed all the steps in the example. No errors; everything works fine.
Check whether port 9880 is open (netstat -neta | grep 9880).
Maybe you have a firewall (Windows) or some iptables rules.
It seems to be a firewall problem. Please check.
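A quick way to narrow it down (standard docker/iptables commands; <container-id> is a placeholder for whatever docker ps shows):

$ docker ps                          # find the running container's ID
$ docker port <container-id>         # should list 9880/tcp -> 0.0.0.0:9880
$ sudo iptables -L -n | grep 9880    # look for REJECT/DROP rules on the port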
I want my hello-world container to send its output to Fluentd, and I'd like Fluentd to write it to a dynamically named folder.
The idea is to start the container like this:
docker run --log-driver=fluentd --log-opt fluentdLogsDirName=docker.{{.NAME}} hello-world
and the fluentd config file is like this:
<source>
  @type forward
</source>
<match docker.**>
  @type file
  path /var/log/fluentd/#{fluentdLogsDirName}
  time_slice_format %Y%m%d
  time_slice_wait 10m
  time_format %Y%m%dT%H%M%S%z
  compress gzip
  utc
</match>
The thing is, fluentd has errors on startup (BTW, fluentd is also running in Docker):
2016-03-28 14:48:56 +0000 [info]: reading config file path="/fluentd/etc/test.conf"
2016-03-28 14:48:56 +0000 [info]: starting fluentd-0.12.21
2016-03-28 14:48:56 +0000 [info]: gem 'fluentd' version '0.12.21'
2016-03-28 14:48:56 +0000 [info]: adding match pattern="docker.**" type="stdout"
2016-03-28 14:48:56 +0000 [info]: adding match pattern="docker.**" type="file"
2016-03-28 14:48:56 +0000 [error]: config error file="/fluentd/etc/test.conf" error="out_file: `/var/log/fluentd/\#{fluentdLogsDirName}.20160328_0.log` is not writable"
2016-03-28 14:48:56 +0000 [info]: process finished code=256
2016-03-28 14:48:56 +0000 [warn]: process died within 1 second. exit.
I started my fluentd container with:
docker run -it -p 24224:24224 -v /blabla:/fluentd/etc -e FLUENTD_CONF=test.conf fluent/fluentd:latest
http://docs.fluentd.org/articles/out_file
I don't think fluentdLogsDirName is currently an option for the fluentd log driver in Docker; https://docs.docker.com/engine/admin/logging/fluentd/
Also, Go templates ({{ .Name }}) are only supported for tags (https://docs.docker.com/engine/admin/logging/log_tags/) and not for other options to logging drivers.
So at this moment, I don't think this is possible.
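That said, since Go templates are supported for the tag, the usual workaround is to encode the container name in the tag and expand it in the output path on the fluentd side. A sketch, assuming a fluentd version with v1-style buffer placeholders (on the 0.12 in the question you'd need a plugin such as fluent-plugin-forest instead):

docker run --log-driver=fluentd --log-opt tag="docker.{{.Name}}" hello-world

<match docker.**>
  @type file
  path /var/log/fluentd/${tag}    # ${tag} is expanded per buffer chunk
  compress gzip
  <buffer tag, time>
    timekey 1d
    timekey_wait 10m
  </buffer>
</match>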