How can I override a configuration value in Apache Flink?

I'm trying to gather metrics from Apache Flink into Prometheus. The Flink documentation says that I need to add the following lines to my flink-conf.yaml:
metrics.reporter.promgateway.class: org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporter
metrics.reporter.promgateway.host: localhost
metrics.reporter.promgateway.port: 9091
metrics.reporter.promgateway.jobName: myJob
I want to mark different jobs with different names inside Prometheus. How can I override the configuration parameter metrics.reporter.promgateway.jobName on a per-job basis (each job runs inside its own Flink cluster session)?
There are a couple of problems:
I can't override flink-conf.yaml. I've only found the FLINK_CONF_DIR parameter, which overrides the whole configuration directory, and overriding the configuration directory for every single job doesn't look like the right solution.
I can't override the initial configuration of StreamExecutionEnvironment, because it is constructed inside the StreamExecutionEnvironment.getExecutionEnvironment method and can't be modified after the environment has been initialized.

You can modify the effective configuration by specifying a dynamic property when starting a Flink job cluster. Assuming that you are deploying to YARN, the command would look like:
bin/flink run -m yarn-cluster -yD metrics.reporter.promgateway.jobName=myCustomJob <USER_CODE_JAR>
The dynamic properties are sent to the YARN cluster and overwrite existing configuration key-value pairs.
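If you are on a newer Flink release (1.12 or later), the execution environment can also be created from an explicit Configuration. A minimal sketch, assuming that overload is available in your version; whether cluster-side settings such as metrics reporters honour values passed this way depends on the deployment mode, so for a YARN cluster the -yD flag above remains the safer route:

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class MetricsJobNameExample {
    public static void main(String[] args) throws Exception {
        // Per-job override of the push gateway job name (the value is hypothetical).
        Configuration conf = new Configuration();
        conf.setString("metrics.reporter.promgateway.jobName", "myCustomJob");

        // getExecutionEnvironment(Configuration) exists from Flink 1.12 onwards;
        // older releases only offer the no-argument variant.
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment(conf);

        env.fromElements(1, 2, 3).print();
        env.execute("myCustomJob");
    }
}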

Related

How to set dependency-mapping binding in Gradle bootBuildImage (Spring Boot 2.7.1, native)

I am using Spring Boot 2.7.1 with the native configuration described in the guide linked below.
Spring Native official doc
My problem is that when running bootBuildImage, the buildpack ["gcr.io/paketo-buildpacks/java-native-image:7.19.0"] tries to download the external dependency paketo-buildpacks/bellsoft-liberica from https://download.bell-sw.com/vm/22.3.0/bellsoft-liberica-vm-core-openjdk17.0.5+8-22.3.0+2-linux-amd64.tar.gz, which is blocked by the company firewall.
I then found that you can configure dependency-mapping bindings for these dependencies within the required buildpack, at least when using the pack CLI, as described in this pack CLI guide.
But when using the pack CLI alone, the Gradle bootBuildImage task becomes somewhat redundant, and I would then need an external tool to produce the native Docker container and image. I would like to use only bootBuildImage to map these dependency bindings.
I found this binding function in the Gradle bootBuildImage docs, but I am not sure what string it expects, or whether the path should be similar to the pack CLI config; I can't find any relevant info.
The bootBuildImage config looks like this:
bootBuildImage {
    builder = 'docker.io/paketobuildpacks/builder:tiny'
    runImage = 'docker.io/paketobuildpacks/run:tiny-cnb'
    buildpacks = ['gcr.io/paketo-buildpacks/java-native-image']
    binding("bindnings/bellsoft-jre-config:/platform/bindings/bellsoft-jre-config")
    environment = [
        "BP_NATIVE_IMAGE" : "true",
    ]
}
The dependency-mapping config contains two files:
The type file contains dependency-mapping (created with echo "dependency-mapping" >> type).
The file named after the sha256 of the bellsoft-liberica artifact, 3dea0f7a9312c738d22b5e399b6ce9abe13b45b2bc2c04346beb941a94e8a932, contains the download URL (created with echo "https://download.bell-sw.com/vm/22.3.0/bellsoft-liberica-vm-core-openjdk17.0.5+8-22.3.0+2-linux-amd64.tar.gz" >> 3dea0f7a9312c738d22b5e399b6ce9abe13b45b2bc2c04346beb941a94e8a932).
And yes, I'm aware that this is the exact same URL, but this is just to test that the binding config is set up correctly; if it is, the build should instead fail on an untrusted certificate when downloading.
Currently the build fails with:
Caused by: org.springframework.boot.buildpack.platform.docker.transport.DockerEngineException: Docker API call to 'localhost/v1.24/containers/create' failed with status code 400 "Bad Request"
at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.execute(HttpClientTransport.java:156)
at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.execute(HttpClientTransport.java:136)
at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.post(HttpClientTransport.java:108)
at org.springframework.boot.buildpack.platform.docker.DockerApi$ContainerApi.createContainer(DockerApi.java:340)
at org.springframework.boot.buildpack.platform.docker.DockerApi$ContainerApi.create(DockerApi.java:331)
at org.springframework.boot.buildpack.platform.build.Lifecycle.createContainer(Lifecycle.java:237)
at org.springframework.boot.buildpack.platform.build.Lifecycle.run(Lifecycle.java:217)
at org.springframework.boot.buildpack.platform.build.Lifecycle.execute(Lifecycle.java:151)
at org.springframework.boot.buildpack.platform.build.Builder.executeLifecycle(Builder.java:157)
at org.springframework.boot.buildpack.platform.build.Builder.build(Builder.java:115)
at org.springframework.boot.gradle.tasks.bundling.BootBuildImage.buildImage(BootBuildImage.java:521)
I assume this is caused by an invalid binding config, but I can't find what it should be.
Paketo configuration (binding)
Dependency mapping bindings can be tricky. There are a number of things that have to be just right, or the buildpacks won't pick up the binding and won't map dependencies.
While there is talk of changing this in buildpacks to make swapping out dependencies easier, the short-term solution is to use binding-tool.
You can run bt dm -b paketo-buildpacks/bellsoft-liberica and it will go download the dependencies from the specified buildpack and generate the binding files for you.
It will by default download dependencies and write the bindings to $PWD/bindings but you can change that. For example, I like to put my dependencies in my home directory so I can share them across apps. Ex: SERVICE_BINDING_ROOT=~/.bt/bindings bt dm ..., or export SERVICE_BINDING_ROOT=~/.bt/bindings (or whatever command you run to set an env variable in your shell).
Once you have the bindings created, you just need to point your app to them. How you set the property differs between Maven & Gradle, but the value of the property is the same. It should be <local-path>:<container-path>.
The local path should be the full or relative path to where you created the bindings with bt dm. The container path should almost always be /platform/bindings. This maps your full set of bindings locally to the full set of bindings that the buildpacks will consume. In other words, put all of your bindings into the same directory locally, map that to /platform/bindings and the buildpacks will see everything.
For example with Gradle: binding("bindings/:/platform/bindings").
You can adjust the container path by setting SERVICE_BINDING_ROOT in the container as well, but it doesn't offer a lot of advantage.
You can also set multiple entries for bindings, so long as the paths are unique. So you could set binding("/home/user/.bt/bindings/foo:/platform/bindings/foo") and also binding("bindings/bar:/platform/bindings/bar"). That would let you take bindings from two different locations locally and map them into the /platform/bindings directory so both would be visible to buildpacks. This gives you more fine-grained control but as you can see becomes pretty verbose.
Details on configuring Maven and configuring Gradle for buildpacks can be found at those links.
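Applied to the setup from the question, the Gradle side could look roughly like this (a sketch: the builder, run image, and buildpack are copied from the question, and the local layout bindings/bellsoft-jre-config/{type, sha256 file} is an assumption about where the two files described above live):

bootBuildImage {
    builder = 'docker.io/paketobuildpacks/builder:tiny'
    runImage = 'docker.io/paketobuildpacks/run:tiny-cnb'
    buildpacks = ['gcr.io/paketo-buildpacks/java-native-image']
    // map the whole local bindings directory onto /platform/bindings in the build container
    binding("${project.projectDir}/bindings:/platform/bindings")
    environment = [
        "BP_NATIVE_IMAGE" : "true"
    ]
}

Using an absolute path (here via ${project.projectDir}) removes any ambiguity about what a relative path is resolved against; note also that the snippet in the question spells the local directory "bindnings", which presumably points at a directory that does not exist.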

EJBCA on Docker: use environment variables in .properties files

I am trying to configure EJBCA 6.15.2.1 on WildFly 12.0.0.Final inside a Docker container with the help of the EJBCA .properties files. In $EJBCA_HOME/conf/externalra-gui.properties.sample there is a comment showing that one of the default settings is appserver.home=${env.APPSRV_HOME}. I tried to set other options in a similar way, e.g. in database.properties: database.datasource=${env.WF_DATASRC}.
I ran ant clean deployear and it didn't deploy my EJBCA instance properly at first: server.log showed that there is no datasource under the name "${env.WF_DATASRC}". It proceeded correctly after I changed the line to database.datasource=ejbcads, which is the exact value of the variable and the name of the data source inside the WildFly server.
I get similar errors during further installation steps. Is there another way of setting EJBCA configuration using environment variables?

How to set up divolte.io for multiple websites on the same server?

I've set up a data pipeline using divolte.io to stream click data from a website to a server. I'm not sure how I can do this for multiple websites, because all the streams could get mixed up. Any ideas on how to do this?
On the same server, you need to bind to different ports
Create more than one config file, setting divolte.global.server.port to different values, then run the application with those configs.
In order to use a new config file, it actually needs to be in its own directory:
Divolte Collector will try to find configuration files at startup in the configuration directory. Typically this is the conf/ directory nested under the Divolte Collector installation. Divolte Collector will try to locate the configuration directory at ../conf relative to the startup script. The configuration directory can be overridden by setting the DIVOLTE_CONF_DIR environment variable. If set, the value will be used as configuration directory
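Putting the two pieces together, a minimal sketch (the config file name, ports, and startup script path are assumptions; divolte.global.server.port and DIVOLTE_CONF_DIR come from the answer and the docs quoted above):

# /opt/divolte/conf-site-a/divolte-collector.conf
divolte.global.server.port = 8290

# /opt/divolte/conf-site-b/divolte-collector.conf
divolte.global.server.port = 8291

# one collector process per site, each pointed at its own configuration directory
DIVOLTE_CONF_DIR=/opt/divolte/conf-site-a ./bin/divolte-collector &
DIVOLTE_CONF_DIR=/opt/divolte/conf-site-b ./bin/divolte-collector &

Each website's tracking tag can then point at its own collector port, so the click streams stay separate.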
Alternatively, you could run the exact same config within many containers/VMs, then use port mappings around that

How to read other config information into a Dropwizard service

I am building a Dropwizard service which will connect to multiple data sources, including MySQL and Elasticsearch. All the MySQL settings can be defined in the YAML config file, which gets read in when the service is run from the command line.
But what about other settings that I need to read in for other data sources that I will connect with myself, for example Elasticsearch? Where can I define those settings?
I thought I could add another command-line Command, which I tried, but I can only run a single command (from the command line) at a time, so I can't run both the 'server' command and my custom command 'custom', which is followed by my own config file for Elasticsearch.
How can I introduce settings, either individually or from a file, that are defined at run time (not hard-coded)?
Thanks
Anton
Check out the Dropwizard Core documentation on adding custom configuration.
You'd create an ElasticSearchFactory class similar to the MessageQueueFactory in the example, reference it in your Configuration (which is in turn referenced in your Application), and the options you need can then be added to your main YAML configuration.
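A rough sketch of what that can look like; the class and property names below are made up for illustration, only io.dropwizard.Configuration and the Jackson/validation annotations are existing APIs:

// ElasticSearchFactory.java
import javax.validation.constraints.NotNull;
import com.fasterxml.jackson.annotation.JsonProperty;

public class ElasticSearchFactory {
    @NotNull
    private String hostname;

    private int port = 9200;

    @JsonProperty
    public String getHostname() { return hostname; }

    @JsonProperty
    public void setHostname(String hostname) { this.hostname = hostname; }

    @JsonProperty
    public int getPort() { return port; }

    @JsonProperty
    public void setPort(int port) { this.port = port; }
}

// MyServiceConfiguration.java
import javax.validation.Valid;
import javax.validation.constraints.NotNull;
import com.fasterxml.jackson.annotation.JsonProperty;
import io.dropwizard.Configuration;

public class MyServiceConfiguration extends Configuration {
    @Valid
    @NotNull
    private ElasticSearchFactory elasticSearch = new ElasticSearchFactory();

    @JsonProperty("elasticSearch")
    public ElasticSearchFactory getElasticSearchFactory() { return elasticSearch; }

    @JsonProperty("elasticSearch")
    public void setElasticSearchFactory(ElasticSearchFactory factory) { this.elasticSearch = factory; }
}

The corresponding block in the main YAML config would then be something like:

elasticSearch:
  hostname: localhost
  port: 9200

and your Application's run method can read it via configuration.getElasticSearchFactory().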

Configuring Sonar plugin for Jenkins

I am having some confusion over configuring the Sonar plugin on Jenkins. I went to Manage Jenkins -> Configure System and added Sonar. I am confused about what to put in the Database URL field in the Sonar section.
I put
jdbc:mysql://10.4.1.206/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true
10.4.1.206 is the node I am connecting to.
However, the port is 3306.
Should I put
jdbc:mysql://10.4.1.206:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true or just leave it as it was?
Also, since I am using MySQL, do I need to put com.mysql.jdbc.Driver in the Driver section? It says to leave it blank if I am using the embedded default driver.
Please forgive me; this is my first time tampering with both Jenkins and Sonar.
If you have configured your Sonar to use MySQL, you need to provide both the URL and the driver. The default embedded database for Sonar is Derby; below is a sample of the default Sonar configuration:
# Comment the 3 following lines to deactivate the default embedded database
sonar.jdbc.url: jdbc:derby://localhost:1527/sonar;create=true
sonar.jdbc.driverClassName: org.apache.derby.jdbc.ClientDriver
sonar.jdbc.validationQuery: values(1)
So, assuming you have configured your Sonar to use MySQL, let's analyze the configuration itself:
The driver that you need to explicitly declare is com.mysql.jdbc.Driver.
Your URL string looks good to me. According to the MySQL Connector/J documentation:
The JDBC URL format for MySQL Connector/J is as follows, with items in square brackets ([, ]) being optional:
jdbc:mysql://[host][,failoverhost...][:port]/[database] »
[?propertyName1][=propertyValue1][&propertyName2][=propertyValue2]...
If the host name is not specified, it defaults to 127.0.0.1. If the port is not specified, it defaults to 3306, the default port number for MySQL servers.
jdbc:mysql://[host:port],[host:port].../[database] »
[?propertyName1][=propertyValue1][&propertyName2][=propertyValue2]...
In my current setup the connection is as follows:
jdbc:mysql://localhost:3306/radical_sonar?useUnicode=true&characterEncoding=utf8
I tend to use the port number explicitly in order to avoid confusion rather than anything else - we do have a test MariaDB install running on a different port...
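Putting it together, the MySQL counterpart of the Derby sample above would look roughly like this in sonar.properties (a sketch; the username and password are placeholders for whatever your Sonar database user is):

sonar.jdbc.url: jdbc:mysql://10.4.1.206:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true
sonar.jdbc.driverClassName: com.mysql.jdbc.Driver
sonar.jdbc.username: sonar
sonar.jdbc.password: sonar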
In Manage Jenkins > Configure System, your Sonar settings should be as follows:
Database URL should be:
jdbc:mysql://10.4.1.206:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true
Driver should be:
com.mysql.jdbc.Driver
If you need more information, you might also want to have a look at your sonarqube/conf/sonar.properties file and the following documentation link:
http://docs.codehaus.org/display/SONAR/Configuring+SonarQube+Jenkins+Plugin
Good Luck with your configuration!
