I would like to set up log forwarding as part of a deployment process. The activity of the machines will be different but they will all log to specific places (notably /var/log).
Is it possible to configure fluentd so that it monitors a whole directory (including the ability to pick up files which pop up while it is active)?
I know that in_tail can do this for a given, specified file but the documentation does not mention a whole directory.
There is what looks like an exact duplicate of this question from 2014, which points to the tail_ex plugin. Unfortunately its description mentions that
Deprecated: Fluentd has the features of this plugin since 0.10.45. So,
the plugin no longer maintained
I still could not find the mentioned features.
This is absolutely possible using the wildcard support in Fluentd's in_tail plugin. In the path parameter you would specify /var/log/*, and Fluentd will automatically skip files that are not readable.
Additionally, if new files show up in this directory, Fluentd will pick them up on its periodic scan, which is controlled by the refresh_interval configuration item: https://docs.fluentd.org/v0.12/articles/in_tail#refreshinterval
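A minimal sketch of such a source block (the tag, pos_file location and format here are my assumptions, not anything Fluentd mandates; narrow the path to /var/log/*.log if you only want plain log files):

    <source>
      @type tail
      # wildcard: every readable file under /var/log is tailed
      path /var/log/*
      pos_file /var/log/td-agent/var-log.pos
      tag system.*
      format none
      # rescan the directory for new files every 60 seconds
      refresh_interval 60
    </source>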
Some notes: if you use Treasure Data's packaged version of Fluentd, td-agent, then you need to ensure that the files you want to tail are readable by the td-agent user that is provisioned as part of that install.
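One possible way to do that (assuming your system supports POSIX ACLs; this only covers files that already exist, so adjust to taste):

    # give the td-agent user read access to /var/log and the files in it
    sudo setfacl -R -m u:td-agent:rX /var/log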
Lastly, if you need to read these files securely, you may want to consider Treasure Data's Enterprise Fluentd offering.
Related
I have a C++ application that uses log4cxx for logging. The log4cxx configuration is done via an XML file where the logging level can be set and different loggers can be enabled and disabled. With the installation running in a VM it was easy to make the necessary modifications by getting into the VM and changing the XML file manually. But now we are going to run the application as a Docker image in the cloud, so the question is how to modify the logger level as and when needed. I did search for this before asking here, but the solutions I found are Java-based (Spring Boot Admin, etc.), which is not suitable here.
I've got a very locked-down egress firewall which restricts access to sites I specify. With the addresses I've specified, Jenkins shows that there are updates available for plugins, but it is unable to retrieve the .hpi files. I can extend the whitelist, but tcpdump shows a variety of endpoints, and I'd rather not open up egress to half the internet.
So my question is: can I host the plugins myself and sync them from a single Jenkins source address? Is there an accepted way of doing this? I've read about running a Jenkins job to change the JSON file (messy). Anyone got anything better?
We have an application deployed in a K8s pod, and all logs are being monitored in an ELK stack. Now we have one application which uses an external *.jar that writes its logs to a file local to the container path. How can I send these logs to the Kubernetes console so that they show up in Elasticsearch monitoring?
Any help is much appreciated!
Now we have one application which uses an external *.jar that writes its logs to a file local to the container path. How can I send these logs to the Kubernetes console so that they show up in Elasticsearch monitoring?
There are three ways, in increasing order of complexity:
Cheat and symlink the path it tries to log to as /dev/stdout (or /proc/1/fd/1); sometimes it works and it's super cheap, but if the logging system tries to seek to the end of the file, or rotate it, or catches on that it's not actually a "file", then you'll have to try other tricks (see the Dockerfile sketch below).
If the app uses a "normal" logging framework, such as log4j, slf4j, logback, etc., you have a better-than-average chance of being able to influence the app's logging behavior via some well-placed configuration files or, in some cases, environment variables
Actually, you know, ask your developers to configure their application according to the 12 Factor App principles and log to stdout (and stderr!) like a sane app
Without more specifics we can't offer more concrete advice, but that's the gist of it
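For the first option, a minimal Dockerfile-style sketch; /var/log/myapp/app.log is a made-up path, point it at wherever the jar actually writes:

    # replace the log file with a symlink to the container's stdout
    RUN mkdir -p /var/log/myapp && \
        ln -sf /dev/stdout /var/log/myapp/app.log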
Some general questions about the docker nodemcu-build process:
Is there a way to specify which modules are included in the build? (similar to the way the cloud build service works)
Is there a way to include a description that will appear when the resultant firmware is run?
Is SSL enabled?
The size of the bin file created by the docker nodemcu-build process (from dev branch source) is 405k. A recent build using the cloud service resulted in a bin file of size 444k. The cloud service build only included the following modules: cjson, file, gpio, http, net, node, tmr, uart, wifi, ssl. Why is the docker build bin file, which supposedly contains all modules, smaller than the cloud build bin file that only contains 10 modules? (I am concerned that my local docker build is missing something, even though the build process was error-free.)
You specify the modules to be built by uncommenting them in the /app/include/user_modules.h file in the source tree. The default build from the source tree is relatively minimal - not an "all modules" build.
The banner at connection is the "Version" field. The nodemcu-build.com builds change this out for custom text. It is defined in /app/include/user_version.h as the USER_VERSION define. You'll need to embed "\n" newlines in the string to get separate lines.
Yes, the Net module can include limited SSL support (TLS 1.1 only) (TLS 1.2 in dev per Marcel's comment below). You need to enable it in /app/include/user_config.h by defining CLIENT_SSL_ENABLE.
The default module and config setup in user_modules.h / user_config.h is different than the defaults on nodemcu-build.com, so the builds are not likely to be the same out of the box.
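For illustration, the edits described above look roughly like this (the module names here are only examples; the exact set of defines varies by branch):

    // app/include/user_modules.h: uncomment the modules you want compiled in
    #define LUA_USE_MODULES_FILE
    #define LUA_USE_MODULES_GPIO
    #define LUA_USE_MODULES_NET
    //#define LUA_USE_MODULES_MQTT    // left commented out, so not built

    // app/include/user_config.h: enable TLS support for the net module
    #define CLIENT_SSL_ENABLE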
I am installing graphite via a docker container.
I have seen that whisper files should not be saved in the container.
So I will be using a data volume from docker to save these on the host machine.
My question is: is there anything else I should be saving on the host? (I know this is subjective, so I guess I'm looking for recommendations on what's important.)
I don't believe I need the configuration (e.g. carbon.conf) as this will come from my installation.
So I'm wondering: are there any other files from Graphite I need (e.g. log files etc.)?
What is your reason for keeping log files? You do, though, need the directory structure in place. Logging defaults to /opt/graphite/storage/logs; in there you have carbon-cache/ and webapp/ directories. The log directory for the webapp is set in its config file, local_settings.py, whereas carbon uses carbon.conf. The configs are well documented, so you can look into them for specific information.
Apart from configs that are generated during installation, the only other 'file' crucial for the webapp to work is graphite.db in /opt/graphite/storage. It is used internally by the Django webapp for housekeeping information such as user auth. It gets generated by python manage.py syncdb, so I believe you can generate it again on the target system.
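In practice, mounting the whole storage directory as a volume covers the whisper files, the logs and graphite.db in one go. A rough sketch (image name and host path are placeholders):

    # keep whisper data, logs and graphite.db on the host
    docker run -d \
      -v /srv/graphite/storage:/opt/graphite/storage \
      your-graphite-image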