My goal is to put my telegraf config into source control. To do so, I have a repo in my user's home directory with the appropriate config file which has already been tested and proven working.
I have added the path to the new config file in the "default" environment variables file:
/etc/default/telegraf
like this:
TELEGRAF_CONFIG_PATH="/home/ubuntu/some_repo/telegraf.conf"
... as well as other required variables such as passwords.
However, when I attempt to run
telegraf --test
it says "No config file specified, and could not find one in $TELEGRAF_CONFIG_PATH", etc.
Further, if I force it by
telegraf --test --config /home/ubuntu/some_repo/telegraf.conf
then the process fails because it is missing the other required variables.
Questions:
What am I doing wrong?
Is there not also a way of specifying a config directory (I would like to break my file down into separate input files)?
Perhaps as an alternative to all of this... is there not a way of specifying additional configuration files to be included from within the default /etc/telegraf/telegraf.conf file? (I've been unable to find any mention of this in documentation).
What am I doing wrong?
Check which user and group own /etc/default/telegraf. That file is really intended for when telegraf runs as a service via systemd. Additionally, if you run env, do you see the TELEGRAF_CONFIG_PATH variable? What about your other variables? If not, then you probably need to source the file first.
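For example, a quick sanity check along those lines (variable name taken from the question) could be:
$ ls -l /etc/default/telegraf
$ env | grep TELEGRAF_CONFIG_PATH
If the grep prints nothing, the variable isn't in your current shell's environment.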
Is there not also a way of specifying a config directory (I would like to break my file down into separate input files)?
Yes! Take a look at all the options of telegraf with telegraf --help and you will find:
--config-directory <directory> directory containing additional *.conf files
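So a test run that combines a main file with a directory of split-out input configs might look something like this (the paths are just illustrative):
$ telegraf --config /home/ubuntu/some_repo/telegraf.conf --config-directory /home/ubuntu/some_repo/telegraf.d --test
Telegraf loads the --config file first and then every *.conf file found in the --config-directory.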
Perhaps as an alternative to all of this... is there not a way of specifying additional configuration files to be included from within the default /etc/telegraf/telegraf.conf file? (I've been unable to find any mention of this in documentation).
That is not the route I would suggest going down. Check out the config-directory option I mentioned above.
Ok, after a LOT of trial and error, I figured everything out. For those facing similar issues, here is your shortcut to the answer:
Firstly, remember that when you add variables to the /etc/default/telegraf file, it must effectively be reloaded. With systemd on Ubuntu, for example, that means restarting the telegraf service.
You can verify that the variables have been loaded successfully using this:
$ sudo strings /proc/<pid>/environ
where <pid> is the "Main PID" from the telegraf status output
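On a systemd-based setup, that whole check could look like the following (12345 stands in for whatever Main PID your status output shows):
$ sudo systemctl restart telegraf
$ systemctl status telegraf          # note the "Main PID" line
$ sudo strings /proc/12345/environ | grep TELEGRAF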
Secondly, when testing (e.g. telegraf --test) you will ALSO have to load the same environment variables into the current shell, e.g. export var=value (this is the part that is not necessarily intuitive and isn't documented), such that running
$ env
shows the same results as the previous command.
Hint: sourcing the env file with set -a, as shown below, is a good method for loading it directly rather than exporting each variable manually.
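A minimal sketch of that, assuming /etc/default/telegraf only contains plain VAR=value lines:
$ set -a                       # auto-export everything defined while sourcing
$ source /etc/default/telegraf
$ set +a
$ env | grep TELEGRAF          # should now match what the service sees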
Related
How can I tell if the settings files associated with a Mosquitto instance, have been properly applied?
I want to add a configuration file to the conf.d folder to override some settings in the default file, but I do not know how to check that they have been applied correctly once the broker is running.
i.e. change persistence to false (without editing the standard file).
Test it.
You can run mosquitto with verbose output enabled, which will generally give you feedback on what options were set, but don't just believe that.
To do that, stop running Mosquitto as a service (how you do this depends on your setup) and manually run it from the shell with the -v option. Be sure to point it at the correct configuration file with the -c option.
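For example, on a typical Linux install managed by systemd (adjust the service name and config path to your setup):
$ sudo systemctl stop mosquitto
$ mosquitto -v -c /etc/mosquitto/mosquitto.conf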
That's not enough to be sure that it's actually working properly. To do that you need to test it.
Options have consequences or we wouldn't use them.
If you configure Mosquitto to listen on a specific port, test it by trying to connect to that port.
If you configure Mosquitto to require secure connections on a port, test it by trying to connect to the port unsecured (this shouldn't work) and secured (this should work).
You should be able to devise relatively simple tests for any options you can set in the configuration file. If you care whether it's actually working, don't just take it on faith; test it.
For extra credit you can bundle the tests up into a script so that you can run an entire test suite easily in the future and test your Mosquitto installation anytime you make changes to it.
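A minimal sketch of such a script, using the stock mosquitto_pub client (host, ports and topic are just placeholders for whatever your config defines):
#!/bin/sh
# A plain listener on 1883 should accept an unsecured publish.
mosquitto_pub -h localhost -p 1883 -t test/config -m hello || echo "FAIL: plain publish on 1883 rejected"
# A TLS-only listener on 8883 should NOT accept an unsecured publish.
mosquitto_pub -h localhost -p 8883 -t test/config -m hello && echo "FAIL: unsecured publish on 8883 accepted"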
Having duplicate configuration options with different values is a REALLY bad idea.
The behaviour of mosquitto is not defined in this case: which value should be honoured, the first found or the last? And when using the conf.d directory, what order will the files be loaded in?
Also, will you remember in the future, when you go looking, that you changed the value in a conf.d file?
If you want to change one of the defaults in the /etc/mosquitto/mosquitto.conf file then edit that file. (Any sensible package management system will notice the file has been changed and ask what to do at the point of upgrade)
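For the persistence example from the question, that just means changing (or adding) this line in /etc/mosquitto/mosquitto.conf:
persistence false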
The conf.d/ directory is intended for adding extra listeners.
Also be aware that there really isn't a default configuration file; you must always specify one with the -c command line option. The file at /etc/mosquitto/mosquitto.conf just happens to be the config file that is passed when mosquitto is started as a service installed via most Linux package managers. (The default Fedora install doesn't even set up the /etc/mosquitto/conf.d directory.)
I do something like this
-javaagent:/usr/local/lib/perfino/perfino.jar=server=ybperfino,name=${HSTNAMESHORT}-${APPNAME},group=${YBENV}/${HSTNAMESHORT},logMBean=10,logFile=${LOG_DIR}/perfinologs/${HSTNAMESHORT}-${APPNAME}.log
Basically I want the log files to be created in the log directory for the app, not the home directory for the userid,
but it seems the log file isn't being created either with the logFile argument or without it!
Using Java 11, if that makes any difference.
Found the answer - I had a competing Java agent that was loading before it.
After I changed the order, both Java agents worked.
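For anyone who lands here: the JVM processes -javaagent options in the order they appear on the command line, so reordering the flags changes which agent initialises first. Schematically (agent-a.jar and agent-b.jar are just stand-ins):
-javaagent:/path/to/agent-a.jar    # premain of agent A runs first
-javaagent:/path/to/agent-b.jar    # then agent B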
I am trying to configure EJBCA 6.15.2.1 on Wildfly 12.0.0.Final inside a Docker container with the help of EJBCA .properties files. In $EJBCA_HOME/conf/externalra-gui.properties.sample there is a comment showing that one of the default settings is: appserver.home=${env.APPSRV_HOME}. I tried to set other options in a similar way, e. g. in database.properties: database.datasource=${env.WF_DATASRC}.
I ran ant clean deployear and at first it didn't deploy my EJBCA instance properly: server.log showed that there is no datasource under the name "${env.WF_DATASRC}". It proceeded correctly after I'd changed the line to database.datasource=ejbcads, which is the exact value of the variable and the name of the data source inside the WildFly server.
I get similar errors during further installation steps. Is there another way of setting EJBCA configuration using environment variables?
I've set up a data pipeline using divolte.io to stream click data from a website to a server. I'm not sure how I can do this for multiple websites, because all the streams can get mixed up. Any ideas on how to do this?
On the same server, you need to bind to different ports
Create more than one config file, setting divolte.global.server.port to different values, then run the application with those configs.
In order to use a new config file, it actually needs to be in its own directory:
Divolte Collector will try to find configuration files at startup in the configuration directory. Typically this is the conf/ directory nested under the Divolte Collector installation. Divolte Collector will try to locate the configuration directory at ../conf relative to the startup script. The configuration directory can be overridden by setting the DIVOLTE_CONF_DIR environment variable. If set, the value will be used as configuration directory
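So a rough sketch for two sites on one host could look like this (paths, ports and directory names are made up; each conf directory holds a copy of the config, differing only in divolte.global.server.port, and the startup script name may differ in your install):
$ DIVOLTE_CONF_DIR=/opt/divolte/site-a/conf /opt/divolte/bin/divolte-collector   # config sets divolte.global.server.port = 8290
$ DIVOLTE_CONF_DIR=/opt/divolte/site-b/conf /opt/divolte/bin/divolte-collector   # config sets divolte.global.server.port = 8291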
Alternatively, you could run the exact same config within many containers/VMs, then use port mappings around that
I am building a dropwizard service which will connect to multiple data sources including mySQL and Elasticsearch. All the mySQL settings can be defined in the yaml config file which gets read in after running from the commandline.
But what about other settings that I need to read in for other data sources that I will connect with myself, for example Elasticsearch? Where can I define those settings?
I thought I could add another command-line Command, which I tried, but I can only run a single command from the command line at a time, so I can't seem to run both the 'server' command and my custom command 'custom', which is followed by my own config file for Elasticsearch.
How can I introduce settings either individually or from a file - which are defined at run time (not hard coded)?
Thanks
Anton
Check out the Dropwizard Core documentation on adding custom configuration.
You'd create an ElasticSearchFactory class similar to the MessageQueueFactory in the example, reference it in your Configuration (which is in turn referenced in your Application), and then the options you need can be added to your main yaml configuration.
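The end result in the main yaml file would then be something like this (the elasticsearch block and its field names are hypothetical; they have to match whatever fields you define on your ElasticSearchFactory):
elasticsearch:
  # mapped onto ElasticSearchFactory by Dropwizard's configuration binding
  host: localhost
  port: 9200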