How to read other config information into a Dropwizard service

I am building a Dropwizard service which will connect to multiple data sources, including MySQL and Elasticsearch. All the MySQL settings can be defined in the YAML config file, which gets read in when the service is run from the command line.
But what about settings for other data sources that I will connect to myself, for example Elasticsearch? Where can I define those?
I thought I could add another command-line Command, which I tried, but I can only run a single command at a time, so I can't run both the 'server' command and my custom command 'custom', which takes my own config file for Elasticsearch.
How can I introduce settings, either individually or from a file, that are defined at run time (not hard-coded)?
Thanks
Anton

Check out the Dropwizard Core documentation on adding custom configuration.
You'd create an ElasticSearchFactory class similar to the MessageQueueFactory in the example, reference it in your Configuration class (which is in turn referenced by your Application), and then add the options you need to your main YAML configuration.
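As a rough sketch of that pattern (the class name, field names, and yaml keys below are hypothetical; adapt them to whatever settings your Elasticsearch client needs):

import javax.validation.Valid;
import javax.validation.constraints.NotNull;
import com.fasterxml.jackson.annotation.JsonProperty;
import io.dropwizard.Configuration;

// ElasticSearchFactory.java: a plain Jackson-bound settings holder
public class ElasticSearchFactory {
    private String host = "localhost";
    private int port = 9200;

    @JsonProperty
    public String getHost() { return host; }

    @JsonProperty
    public void setHost(String host) { this.host = host; }

    @JsonProperty
    public int getPort() { return port; }

    @JsonProperty
    public void setPort(int port) { this.port = port; }
}

// MyServiceConfiguration.java: the Configuration referenced by your Application
public class MyServiceConfiguration extends Configuration {
    @Valid
    @NotNull
    private ElasticSearchFactory elasticSearch = new ElasticSearchFactory();

    @JsonProperty("elasticSearch")
    public ElasticSearchFactory getElasticSearchFactory() { return elasticSearch; }

    @JsonProperty("elasticSearch")
    public void setElasticSearchFactory(ElasticSearchFactory factory) { this.elasticSearch = factory; }
}

The matching section in the main yaml config would then be:

elasticSearch:
  host: localhost
  port: 9200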

How can I tell what options are in use on a running Mosquitto service?

How can I tell if the settings files associated with a Mosquitto instance have been properly applied?
I want to add a configuration file to the conf.d folder to override some settings in the default file, but I do not know how to check that they have been applied correctly once the broker is running.
i.e. change persistence to false (without editing the standard file).
Test it.
You can run mosquitto with verbose output enabled, which will generally give you feedback on what options were set, but don't just take that at face value.
To do that, stop running Mosquitto as a service (how you do this depends on your setup) and manually run it from the shell with the -v option. Be sure to point it at the correct configuration file with the -c option.
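For example, on a typical Linux package install:
mosquitto -c /etc/mosquitto/mosquitto.conf -v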
That's not enough to be sure that it's actually working properly. To do that you need to test it.
Options have consequences or we wouldn't use them.
If you configure Mosquitto to listen on a specific port, test it by trying to connect to that port.
If you configure Mosquitto to require secure connections on a port, test it by trying to connect to the port unsecured (this shouldn't work) and secured (this should work).
You should be able to devise relatively simple tests for any options you can set in the configuration file. If you care if it's actually working, don't just take it on faith; test it.
For extra credit you can bundle the tests up into a script so that you can run an entire test suite easily in the future and test your Mosquitto installation anytime you make changes to it.
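A minimal sketch of such a script, assuming the mosquitto_pub and mosquitto_sub client tools are installed and the broker has a plain listener on port 1883:

#!/bin/sh
# Subscribe in the background and wait for exactly one message on a test topic.
mosquitto_sub -h localhost -p 1883 -t test/config -C 1 > /tmp/mqtt_test_msg &
sleep 1
# Publish to the same topic; if the listener is working, the subscriber sees it.
mosquitto_pub -h localhost -p 1883 -t test/config -m "ping"
wait
grep -q "ping" /tmp/mqtt_test_msg && echo "listener OK" || echo "listener FAILED"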
Having duplicate configuration options with different values is a REALLY bad idea.
The behaviour of mosquitto is not defined in this case: which value should be honoured, the first found or the last? When using the conf.d directory, what order will the files be loaded in?
Also, will you always remember, when you go looking in the future, that you have changed the value in a conf.d file?
If you want to change one of the defaults in the /etc/mosquitto/mosquitto.conf file then edit that file. (Any sensible package management system will notice the file has been changed and ask what to do at the point of upgrade)
The conf.d/ directory is intended for adding extra listeners.
Also be aware that there really isn't a default configuration file; you must always specify one with the -c command line option. The file at /etc/mosquitto/mosquitto.conf just happens to be the config file that is passed when mosquitto is started as a service installed via most Linux package managers. (The default Fedora install doesn't even set up the /etc/mosquitto/conf.d directory.)

How to set dependency-mapping binding in gradle bootBuildImage (Spring-boot 2.7.1, native)

I am using spring-boot 2.7.1 with the native image configuration described in the official guide:
Spring Native official doc
My problem is that when running bootBuildImage, the buildpack ["gcr.io/paketo-buildpacks/java-native-image:7.19.0"] tries to download the external dependency paketo-buildpacks/bellsoft-liberica from https://download.bell-sw.com/vm/22.3.0/bellsoft-liberica-vm-core-openjdk17.0.5+8-22.3.0+2-linux-amd64.tar.gz, which is not allowed by the company firewall.
I then found that you can configure dependency-mapping bindings for these dependencies within the required buildpack, at least when using the pack CLI, per this guide.
But when using the pack CLI directly, the Gradle bootBuildImage task becomes somewhat redundant, and I would have to use an external tool to produce the native Docker container and image. I would like to use only bootBuildImage to map these dependency bindings.
I found this binding function in the Gradle bootBuildImage docs, but I am not sure what string it expects, or whether the path should be similar to the pack CLI config; I can't find any relevant info.
Here is the bootBuildImage config:
bootBuildImage {
    builder = 'docker.io/paketobuildpacks/builder:tiny'
    runImage = 'docker.io/paketobuildpacks/run:tiny-cnb'
    buildpacks = ['gcr.io/paketo-buildpacks/java-native-image']
    binding("bindnings/bellsoft-jre-config:/platform/bindings/bellsoft-jre-config")
    environment = [
        "BP_NATIVE_IMAGE" : "true",
    ]
}
The dependency-mapping config contains 2 files:
The type file was created with:
echo "dependency-mapping" >> type
The sha256 (bellsoft-liberica) file, named 3dea0f7a9312c738d22b5e399b6ce9abe13b45b2bc2c04346beb941a94e8a932, was created with:
echo "https://download.bell-sw.com/vm/22.3.0/bellsoft-liberica-vm-core-openjdk17.0.5+8-22.3.0+2-linux-amd64.tar.gz" >> 3dea0f7a9312c738d22b5e399b6ce9abe13b45b2bc2c04346beb941a94e8a932
And yes, I'm aware that this is the exact same URL; this is just to test that the binding config is set up correctly. If it is, the build should instead fail with an untrusted-certificate error when downloading.
Currently the build fails with:
Caused by: org.springframework.boot.buildpack.platform.docker.transport.DockerEngineException: Docker API call to 'localhost/v1.24/containers/create' failed with status code 400 "Bad Request"
    at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.execute(HttpClientTransport.java:156)
    at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.execute(HttpClientTransport.java:136)
    at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.post(HttpClientTransport.java:108)
    at org.springframework.boot.buildpack.platform.docker.DockerApi$ContainerApi.createContainer(DockerApi.java:340)
    at org.springframework.boot.buildpack.platform.docker.DockerApi$ContainerApi.create(DockerApi.java:331)
    at org.springframework.boot.buildpack.platform.build.Lifecycle.createContainer(Lifecycle.java:237)
    at org.springframework.boot.buildpack.platform.build.Lifecycle.run(Lifecycle.java:217)
    at org.springframework.boot.buildpack.platform.build.Lifecycle.execute(Lifecycle.java:151)
    at org.springframework.boot.buildpack.platform.build.Builder.executeLifecycle(Builder.java:157)
    at org.springframework.boot.buildpack.platform.build.Builder.build(Builder.java:115)
    at org.springframework.boot.gradle.tasks.bundling.BootBuildImage.buildImage(BootBuildImage.java:521)
I assume this is caused by an invalid binding config, but I can't find what it should be.
Paketo configuration (binding)
Dependency mapping bindings can be tricky. There are a number of things that have to be just right, or the buildpacks won't pick up the binding and won't map dependencies.
While there is talk of changing this in buildpacks to make swapping out dependencies easier, the short-term solution is to use binding-tool.
You can run bt dm -b paketo-buildpacks/bellsoft-liberica and it will go download the dependencies from the specified buildpack and generate the binding files for you.
It will by default download dependencies and write the bindings to $PWD/bindings but you can change that. For example, I like to put my dependencies in my home directory so I can share them across apps. Ex: SERVICE_BINDING_ROOT=~/.bt/bindings bt dm ..., or export SERVICE_BINDING_ROOT=~/.bt/bindings (or whatever command you run to set an env variable in your shell).
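For example (the binding root location here is just a personal preference):
export SERVICE_BINDING_ROOT=~/.bt/bindings
bt dm -b paketo-buildpacks/bellsoft-liberica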
Once you have the bindings created, you just need to point your app to them. How you set the property differs between Maven & Gradle, but the value of the property is the same. It should be <local-path>:<container-path>.
The local path should be the full or relative path to where you created the bindings with bt dm. The container path should almost always be /platform/bindings. This maps your full set of bindings locally to the full set of bindings that the buildpacks will consume. In other words, put all of your bindings into the same directory locally, map that to /platform/bindings and the buildpacks will see everything.
For example with Gradle: binding("bindings/:/platform/bindings").
You can adjust the container path by setting SERVICE_BINDING_ROOT in the container as well, but it doesn't offer a lot of advantage.
You can also set multiple entries for bindings, so long as the paths are unique. So you could set binding("/home/user/.bt/bindings/foo:/platform/bindings/foo") and also binding("bindings/bar:/platform/bindings/bar"). That would let you take bindings from two different locations locally and map them into the /platform/bindings directory so both would be visible to buildpacks. This gives you more fine-grained control but as you can see becomes pretty verbose.
Details on configuring Maven and configuring Gradle for buildpacks can be found at those links.
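Putting that together for the build in the question, something along these lines should work (a sketch, assuming the bindings were generated into a bindings/ directory at the project root; note the spelling "bindings", where the original config had "bindnings"):

bootBuildImage {
    builder = 'docker.io/paketobuildpacks/builder:tiny'
    runImage = 'docker.io/paketobuildpacks/run:tiny-cnb'
    buildpacks = ['gcr.io/paketo-buildpacks/java-native-image']
    binding("bindings/:/platform/bindings")
    environment = [
        "BP_NATIVE_IMAGE" : "true",
    ]
}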

Why isn't telegraf reading environment variables?

My goal is to put my telegraf config into source control. To do so, I have a repo in my user's home directory with the appropriate config file which has already been tested and proven working.
I have added the path to the new config file in the "default" environment variables file:
/etc/default/telegraf
like this:
TELEGRAF_CONFIG_PATH="/home/ubuntu/some_repo/telegraf.conf"
... as well as other required variables such as passwords.
However, when I attempt to run
telegraf --test
It says No config file specified, and could not find one in $TELEGRAF_CONFIG_PATH etc.
Further, if I force it with
telegraf --test --config /home/ubuntu/some_repo/telegraf.conf
Then the process fails because it is missing the other required variables.
Questions:
What am I doing wrong?
Is there also a way of specifying a config directory (I would like to break my file down into separate input files)?
Perhaps as an alternative to all of this... is there not a way of specifying additional configuration files to be included from within the default /etc/telegraf/telegraf.conf file? (I've been unable to find any mention of this in documentation).
What am I doing wrong?
See what user:group owns /etc/default/telegraf. This file is best used when running telegraf as a service via systemd. Additionally, if you run env, do you see the TELEGRAF_CONFIG_PATH variable? What about your other variables? If not, then you probably need to source the file first.
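For example, a quick way to pull everything from that file into your current shell (set -a auto-exports every variable defined while it is in effect):

set -a
. /etc/default/telegraf
set +a
telegraf --test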
Is there also a way of specifying a config directory (I would like to break my file down into separate input files)?
Yes! Take a look at all the options of telegraf with telegraf --help and you will find:
--config-directory <directory> directory containing additional *.conf files
Perhaps as an alternative to all of this... is there not a way of specifying additional configuration files to be included from within the default /etc/telegraf/telegraf.conf file? (I've been unable to find any mention of this in documentation).
That is not the route I would suggest going down. Check out the --config-directory option I mentioned above.
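For example (telegraf.d here is a hypothetical directory holding the split-out *.conf files):

telegraf --config /home/ubuntu/some_repo/telegraf.conf --config-directory /home/ubuntu/some_repo/telegraf.d --test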
Ok, after a LOT of trial and error, I figured everything out. For those facing similar issues, here is your shortcut to the answer:
Firstly, remember that when adding variables to the /etc/default/telegraf file, the service must effectively be reloaded for them to take effect. So, for example, on Ubuntu with systemctl, that requires restarting the telegraf service.
You can verify that the variables have been loaded successfully using this:
$ sudo strings /proc/<pid>/environ
where <pid> is the "Main PID" from the systemctl status telegraf output
Secondly, when testing (e.g. telegraf --test), then (this is the part that is not necessarily intuitive and isn't documented) you will ALSO have to load the same environment variables into the current user's shell (e.g. export var=value), such that running
$ env
shows the same results as the previous command.
Hint: sourcing the env file as shown above (set -a; . /etc/default/telegraf; set +a) is a good way to load it directly rather than exporting each variable manually.

How to change *.properties after ant production

I have two .properties files for my hybris project.
The first one is used for the CI process, and as a result I get four zip files with my already-built platform (after ant production).
On my prod instance I need to switch to another properties file, because it contains all my connections to external services such as MySQL, Solr, etc.
How can I do that without running all the ant steps?
. ./setantenv.sh && sync && ant config -Denv=my_new_properties
followed by ./hybrisserver.sh start doesn't work.
There is no information on the wiki: https://cxwiki.sap.com/display/release5/ant+production+improvements
Check if Updating Configuration Settings at Runtime will be useful for you. You will need to use the FileBasedConfigLoader class and the runtime.config.file.path property.
Other best practices include using system variables for secure settings like the DB URL. See the "Using Environment Variables instead of Files for Secure Settings" section in Configuring the Behavior of SAP Commerce.
Another option you can look at is to have different config folders for different environments (e.g. config-dev, config-prd) and pass the folder name to ant, e.g. -Denv=config-dev.

Not able to use linterconfigs command

I want to use SchemaCrawler lint in my project, and I want to use custom lints only. Based on the documentation, we can use the command-line option -linterconfigs=[path to linter XML configuration file]. But when I created the XML configuration file and used custom lints only, I still see the default linters running. Am I doing anything wrong?
Here are the steps I followed:
downloaded and unzipped the package
created a dump database named example.database
created schemacrawler-linter-configs.xml with one of the existing lints
used the following command to run lint on the dump database from the _schemacrawler directory:
./schemacrawler.sh --server=postgresql -command=lint -linterconfigs=schemacrawler-linter-configs.xml -database=example.database
Rashmi, SchemaCrawler runs all linters by default. You need to turn off any linters you don't need in the linter configuration file. Here is an example of how you can do it:
<schemacrawler-linter-configs>
  <linter id="schemacrawler.tools.linter.LinterForeignKeyMismatch">
    <run>false</run>
  </linter>
</schemacrawler-linter-configs>
Sualeh Fatehi, SchemaCrawler
