Is there a way to supply FluentD with multiple Conf files for separate use cases? - fluentd

I have a FluentD ingestion pipeline for EFK. As the tooling grows, additional apps want to be part of this instance. Instead of each app hosting its own version of EFK, they want to attach to the existing one. Looking at the Dockerization of FluentD, I was thinking "well, there must be a way to give fluentd a folder and then just execute all the conf files in that folder".
The Dockerfile essentially just builds the container and runs the fluentd CLI. This uses the -c flag, documented at: https://docs.fluentd.org/deployment/command-line-option
So I was thinking there might be a way to start fluentd with a folder instead, so that one instance of fluentd could serve multiple pipelines. The alternative would be to have a FluentD instance for every app, each piping into Elasticsearch.
While both ways could work, I think adding additional conf files makes the most sense, especially since ingestion isn't anywhere near being overworked. Then, whenever something is caught, it would follow whichever pipeline fits.
Is there a way, instead of passing a single config file with -c, to pass in a folder, or multiple -c flags? I looked at the CLI options and didn't see anything offhand.
Or are multiple instances of FluentD the best way to go?
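For reference, Fluentd's config format itself supports an @include directive that accepts glob patterns, so a single file passed via -c can pull in a whole folder of per-app pipelines. A minimal sketch (the match block assumes the fluent-plugin-elasticsearch output plugin and an Elasticsearch host reachable as elasticsearch; names and tags are illustrative):

```
# fluentd.conf: the single file handed to fluentd via -c
# Pull in every per-app pipeline dropped into conf.d/
@include conf.d/*.conf

# Shared sink: anything the per-app pipelines tag as app.** goes to Elasticsearch
<match app.**>
  @type elasticsearch
  host elasticsearch
  port 9200
  logstash_format true
</match>
```

Each app can then ship its own conf.d/<app>.conf with its sources and filters, and the one Fluentd instance runs all of them.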

Related

Is there a global hook mechanism for Docker Watchtower?

I am currently developing a service composed of multiple containers, described in a docker-compose file.
I need an automated mechanism to update my container images. Watchtower seems to be an appropriate solution, except that I need to call a script before any of the containers update. The pre-update hook could do the trick, but I would have to duplicate my script in each image of my service.
Do you know if there is a way to add a "global hook" triggered when any container in my docker-compose is about to update?
If there is not, do you know which tool I should use to get this kind of behaviour?
Thanks
Unfortunately, this is not possible currently.
However, you can easily work around this by mounting your script file into the containers you want to check (preferably read-only), since the lifecycle hooks allow you to execute script files. That way you don't have to duplicate the actual script for each container, but can keep it in one place.
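A sketch of what that can look like in docker-compose (the service name, image and script path are illustrative; the Watchtower label and flag names follow its lifecycle-hook documentation and are worth double-checking against your version):

```yaml
services:
  my-service:                       # hypothetical app container
    image: my-org/my-service:latest
    volumes:
      - ./hooks/pre-update.sh:/hooks/pre-update.sh:ro   # one script, mounted read-only
    labels:
      - "com.centurylinklabs.watchtower.lifecycle.pre-update=/hooks/pre-update.sh"

  watchtower:
    image: containrrr/watchtower
    command: --enable-lifecycle-hooks
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```

Repeat the same volume and label lines on every service in the compose file; the script itself still lives in just one place on the host.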

Persisting docker container logs in kubernetes

I'm looking for a really simple, lightweight way of persisting logs from a Docker container running in Kubernetes. I just want the stdout (and stderr, I guess) to go to persistent disk; I don't want anything else for analysing the logs, sending them over the internet to a third party, etc. as part of this.
Having done some reading, I've been considering a DaemonSet with the application container plus another container which has /var/lib/docker/containers mounted and a persistent volume (maybe NFS) mounted too. That second container would then need a way to copy logs from the default Docker JSON logging driver's files in /var/lib/docker/containers to the persistent volume, maybe rsync running regularly.
Would that work? (Presumably if the rsync container goes down it's going to miss stuff because nothing's queuing; perhaps that's OK rather than trying to queue potentially huge amounts of logs.) Is this a sensible approach for the desired outcome? It's only for one or two containers, if that makes a difference. Thanks.
Fluentd supports a simple file output plugin (https://docs.fluentd.org/output/file), which you can easily aim at a PersistentVolume mount. Otherwise you would configure Fluentd (or Fluent Bit, if you prefer) just like normal for Kubernetes, so find your favourite guide and follow it.
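A minimal sketch of such an output section, assuming the PersistentVolume is mounted into the Fluentd pod at /logs (match pattern and path are illustrative):

```
<match **>
  @type file            # https://docs.fluentd.org/output/file
  path /logs/app        # lands on the PersistentVolume mount
  append true
</match>
```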

How to run Dockerfile or docker-compose file from automation suite

I am creating an automation framework using Selenium, and my entry point for execution is creating containers of different DB types, loading them with database dumps and then starting the tests.
I have one simple, and maybe foolish, question.
If I create a docker-compose file which creates the above-mentioned containers, we would generally run docker-compose up to bring them up.
But can I control the docker-compose/Dockerfile while the execution is going on, like:
Test starts from TestNG -> a "before" script runs to execute the docker-compose file and create the containers.
How can I control that?
Thanks in advance
I can think of the following options:
1- Use Ansible to deploy for you; you can write a playbook with the steps.
Advantages: it scales, it will manage everything for you, and you can add notifications, but it requires managing Ansible itself and learning it.
2- Use a shell script that starts things in order: start the containers (or however you want the order to be), then start TestNG. A cheap and dirty solution; see the sketch below.
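A rough sketch of the second option (paths, wait logic and the test command are placeholders for whatever your project actually uses):

```sh
#!/usr/bin/env bash
set -euo pipefail

# 1. Bring up the database containers in the background
docker-compose up -d

# 2. Crude wait for the databases to come up; replace with a proper healthcheck/retry loop
sleep 15

# 3. Run the TestNG suite (here via Maven; use whatever invokes your tests)
mvn test

# 4. Tear everything down afterwards
docker-compose down
```

The same thing can also be done from inside the suite, e.g. a TestNG @BeforeSuite method that shells out to docker-compose up -d, if you prefer not to have a wrapper script.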

Docker container with Elk stack to browse nginx and tomcat log files

I am trying to debug a production failure involving (multiple) nginx and Tomcat logs. I have copied the logs to my dev machine. What is the easiest way for me to import these logs into an Elastic/ELK stack to sift through them quickly? (Currently, I'm making do with less commands across multiple windows.)
So far I've only found generic Docker containers (like https://elk-docker.readthedocs.io/) that require me to install Filebeat and configure it. However, since my data is static, I would prefer a simpler setup.
What I did earlier is create the ELK stack with docker-compose and ingest the data via nc (netcat). An example can be found at: https://github.com/deviantony/docker-elk
You might want to adjust the Logstash config so that it reads and parses your data correctly. If the number of files is not too big, you can nc them one by one; otherwise you can write a small script around it, in bash for example, to loop through the files.
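For example, assuming the docker-elk default Logstash TCP input on port 5000 (adjust host/port to your logstash.conf):

```sh
# Push the static log files into Logstash one by one
for f in /path/to/logs/*.log; do
  nc localhost 5000 < "$f"   # depending on your nc variant you may need -q 1 or -N to close on EOF
done
```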

amazon ecs Container for Configuration files

At the moment I am trying to figure out a good setup for my application in Amazon ECS.
My application needs a config file. I want to have a container to hold my config file, so that when I want to change something I don't need to redeploy my application.
I can't find any best-practice method for this. What I found out is that ECS tasks just do a docker run, and you can't do a docker create.
Does anyone have an idea how I can manage my config files for my applications?
Most likely using Docker for this is overkill. How complex is the data? If it's simple key-value pairs I would use DynamoDB and get rid of the file completely. Another option would be using EFS for the file, or attaching/detaching an EBS volume.
You should not do that: it makes the setup fragile, and you're not guaranteed to be able to access the file from all containers across a cluster (or you end up having it on all instances, which wastes resources). Why not package it up with the container as-is, or package as much as possible and provide environment variables to fill in the gaps? If you really want to go this route, I highly suggest something like S3.
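If you do go the S3 route, a common pattern is to fetch the file in the container's entrypoint at startup rather than keeping a separate config container. A sketch (bucket, key and app command are hypothetical; the ECS task role needs s3:GetObject on the bucket):

```sh
#!/bin/sh
set -e

# Pull the current config at startup, then hand off to the application
aws s3 cp "s3://my-config-bucket/myapp/app.conf" /etc/myapp/app.conf
exec /usr/local/bin/myapp --config /etc/myapp/app.conf
```

Changing the config is then just a matter of uploading a new file and restarting the task, with no image rebuild.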
