Replace jmx_exporter_config.xml in the Docker image built by the Fabric8 maven plugin

The default Docker image built using mvn package fabric8:build contains /opt/agent_bond/jmx_exporter_config.xml, which filters out the JMX beans I'm interested in exporting to Prometheus.
How can I replace that file with my own during the build with Fabric8 maven plugin?
I know I can construct the Docker image from scratch or redefine everything using Fabric8 plugin XML config, but that's too invasive in comparison to the default "Zero-config" approach.
Update:
So one way I found is to add a custom jmx_exporter_config.yml file to src/main/fabric8-includes and then add an environment variable via the Kubernetes deployment in src/main/fabric8/deployment.yml:
spec:
  template:
    spec:
      containers:
        - env:
            - name: AB_JMX_EXPORTER_CONFIG
              value: /deployments/jmx_exporter_config.yml
Unfortunately, the Docker image in this case would still not expose custom metrics by default.
I also cannot add exposed ports when using the Spring Boot generator's "zero-configuration" option.
Update 2
So the workaround with the ENV entry works OK and is probably sufficient for my use case.
The 4th port that I was trying to expose was the Spring Boot management port (default 8081). But it seems that it is not needed for Kubernetes liveness probe checks, since the kubelet can access that port even if it is not exposed from the Docker image of the Spring Boot service.

The way you describe is currently the recommended way, but it should probably be easier to add such config files.
For the "zero-config" mode you can set this in the configuration (so not quite zero ;-):
<configuration>
  <generator>
    <config>
      <spring-boot>
        <webPort>8080</webPort>
        <!-- don't expose jolokia -->
        <jolokiaPort>-1</jolokiaPort>
        <prometheusPort>9999</prometheusPort>
      </spring-boot>
    </config>
  </generator>
</configuration>
If you need additional ports you would have to provide them in a resource YAML fragment. See also https://maven.fabric8.io/#generator-java-exec for the options you can use here.
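As a sketch of such a fragment (the port name and number here are illustrative assumptions, not taken from the fabric8 docs), an additional container port could be declared in src/main/fabric8/deployment.yml and merged into the generated descriptor:

```yaml
# Hypothetical fragment: the port name and number are assumptions for illustration
spec:
  template:
    spec:
      containers:
        - ports:
            - name: management
              containerPort: 8081
              protocol: TCP
```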

Related

How to configure Graylog Plugin on bootstrap (non interactive)?

I set up a Graylog server based on the official Graylog 3 Docker image and added the SSO plugin. In principle it works, but I have to configure the SSO headers via the UI after each container start.
I see the options to configure Graylog itself using either a server.conf file or environment variables. But I cannot find any way to configure the plugin upfront to get a final image for automatic deployment.
Is there any way to configure Graylog plugins using special config file entries, prefixed environment variables or separate config files?
If you create your own shell script to update files/settings, you can create a new image based on the original (with a new Dockerfile) which, when started, runs the script, modifies any relevant settings and starts the application server. Even better if the script takes inputs which you can supply as environment variables to the Docker container.
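As a minimal sketch of that idea (the config path, the username_header key and the SSO_USERNAME_HEADER variable are all assumptions for illustration, not actual Graylog SSO plugin settings), a wrapper entrypoint could write the settings from environment variables before handing off to the real server command:

```shell
#!/bin/sh
# Hypothetical wrapper entrypoint: writes plugin settings derived from
# environment variables, then hands off to the original server command.
# The PLUGIN_CONF default and the username_header key are illustrative only.
PLUGIN_CONF="${PLUGIN_CONF:-./sso.conf}"
printf 'username_header = %s\n' "${SSO_USERNAME_HEADER:-X-Forwarded-User}" > "$PLUGIN_CONF"
# Hand off to whatever command the image normally runs
exec "$@"
```

In a derived image this script would be copied in and set as the ENTRYPOINT, with the original start command kept as CMD, so the settings survive every container start.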

Expose metrics for Kotlin using JMX_Exporter to prometheus

I am trying to use JMX_Exporter with my Kotlin code to expose metrics to Prometheus in order to show them in Grafana. I have gone through many articles and tried to understand how it could be done. I found the two links below useful and am trying to achieve it using those.
https://github.com/prometheus/jmx_exporter
https://www.openlogic.com/blog/monitoring-java-applications-prometheus-and-grafana-part-1
What I did so far: I created a folder 'prometheus-jmx' in the root directory and added the mentioned JAR and config.yml file to that folder. Then I added the parameter below to my Dockerfile.
CMD java -Xms128m -Xmx256m -javaagent:./jmx_prometheus_javaagent-0.12.0.jar=8080:./config.yml -Dconfig.file=config/routing.conf -cp jagathe-jar-with-dependencies.jar:./* com.bmw.otd.agathe.AppKt
My Prometheus is running in my OpenShift cluster along with my application. I could scrape metrics for my other applications/deployments like Jenkins, SonarQube etc. without any modifications to Prometheus's deployment.yml.
My application is now running properly on OpenShift, and from the application's pod I can scrape metrics using the command below.
curl http://localhost:portNumber
But on the Prometheus UI, I can't see any JVM or JMX related metrics.
Can someone please tell me where and what I am doing wrong? Any kind of help would be appreciated.
After trying many things, I came to know that I needed to expose my application's container port in order to let Prometheus (or other deployments) discover it. After exposing the port, I could see my application under targets in Prometheus and could scrape all JMX and JVM metrics. Hope this helps someone in the future...
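For example (a sketch only: the port number, the name and the annotation-based discovery are assumptions that depend on how your Prometheus instance is configured to find targets), the container port can be declared in the deployment so Prometheus can see it:

```yaml
# Hypothetical deployment fragment: port and annotations are illustrative
spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
    spec:
      containers:
        - name: app
          ports:
            - name: metrics
              containerPort: 8080
```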

Deploying a mule application to a mule cluster using docker

I am kind of new to Mule ESB and deployments. I have been doing some trials around deploying a Mule application to a standalone Mule runtime. I am using an approach similar to this:
https://dzone.com/articles/dockerizing-clustering-and-queueing-up-with-mule-e
But my question is: if I have a Mule cluster into which I deploy both my Mule proxies and my Mule APIs, is there any way to do that? How would I bind individual Docker images to the same Mule cluster? Or, if I have individual containers each running a Mule runtime as mentioned in the approach above, how would I bind those containers into the same cluster?
Let's break down your questions one by one and try to answer them.
If I have a Mule cluster into which I deploy both my Mule proxies and my Mule APIs, is there any way to do that?
If you have a Mule runtime prior to version 3.8, you need a separate API Gateway to deploy your API proxy. As of Mule 3.8, MuleSoft has unified the Mule runtime and the API gateway, which means your API proxy can be deployed directly into the Mule runtime; you don't need a separate API gateway for your proxy.
All the APIs, proxies and policies can be deployed into the Mule runtime directly.
https://blogs.mulesoft.com/dev/mule-dev/announcing-mule-3-8-unified-runtime-integration-api-management/
How would I bind individual docker images to the same mule cluster?
If you read the article carefully, you can see in the Create a Mule Cluster in Docker section:
There are two properties config files, one for each node, which define the properties of each cluster node, and a YAML file which binds both runtimes into a cluster. This YAML file points to both properties files describing the nodes.
There is also a Dockerfile that takes the base image described at the top (FROM anirban-mule-demo), creates the Mule runtime, and deploys the Mule application defined there.
When you run docker-compose build, the YAML file binds both runtimes and builds the Mule cluster within Docker containers. In the background, the base image is run twice, creating two different Mule runtimes, and the cluster is then created using the two properties config files, each describing the properties of one node.
It actually uses the process of creating a Mule cluster from properties files, which is another way of setting up a Mule cluster.
You can find an example of how to create a Mule cluster from properties files here.
At the end, you can use the docker run command to start both Mule runtimes in the cluster; the applications inside them will get different HTTP ports, 7082 and 8082 respectively, as defined in the docker run command.
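A docker-compose.yml for such a setup might look roughly like this (a sketch only: the image name follows the article, but the property file names, the mount path and the port mappings are assumptions):

```yaml
# Hypothetical compose file: file names, paths and ports are illustrative
version: "2"
services:
  mule-node1:
    image: anirban-mule-demo
    ports:
      - "7082:8082"
    volumes:
      - ./node1.properties:/opt/mule/.mule/mule-cluster.properties
  mule-node2:
    image: anirban-mule-demo
    ports:
      - "8082:8082"
    volumes:
      - ./node2.properties:/opt/mule/.mule/mule-cluster.properties
```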
Hope this helps.

How to register keycloak module in standalone.xml (keycloak on docker)

I'm creating a module for Keycloak and I'm trying to register it using Modules, just as the documentation says.
How can I register this module in the keycloak-server subsystem section of standalone.xml when I'm running Keycloak with Docker?
Start the server during the Docker build, then run a jboss-cli batch script to modify the configuration.
If jboss-cli.bat --file=adapter-install.cli doesn't work, you can add --connect. Try the following command:
jboss-cli.bat --connect --file=adapter-install.cli
You can prepare module.xml manually (you could use the examples from JBOSS_HOME/modules/.. as a reference). Don't forget to specify all required dependencies (keycloak-core, javax, ...). Then you can add module.xml and the corresponding jars during the Dockerfile build, or add module.xml during the image build and add the jars as volumes.
Also consider running JBoss scripts in embedded mode during the image build. For me there was too much preliminary script work running before the actual Keycloak service started; I would prefer to bake a custom image using only a Dockerfile (but use the official Keycloak Docker sources as a reference).
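The embedded-mode idea could look roughly like the following jboss-cli batch script, run during the image build via something like RUN bin/jboss-cli.sh --file=register-module.cli (the module name here is an illustrative assumption; check your module.xml for the real one):

```
# register-module.cli (sketch): edits standalone.xml without a running server
embed-server --server-config=standalone.xml
/subsystem=keycloak-server:list-add(name=providers, value=module:my.custom.module)
stop-embedded-server
```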
Since you are using Docker to run Keycloak, you can copy your custom CLI scripts in the Dockerfile and run them. We mimicked what Keycloak does in their image and it worked for us, even for adding modules.
https://github.com/jboss-dockerfiles/keycloak/tree/master/server/tools/cli
Our case was adding the Sentry module (http://cloudtrust.io/doc/chapter-keycloak/sentry.html), but we didn't follow it literally.

How to run micro services using docker

I am a newbie to Spring Boot. I need to create microservices and run them with Docker. I have attached my project structure here. The problem is that every time I need to bring up the microservices manually. For example, I have 4 microservices and I just start these services manually. But all microservices should start by themselves when deployed into Docker. How can I achieve this?
I am also using a Cassandra database.
I don't know if it is the best solution, but it is the one I used:
First, tell the Spring Boot Maven plugin to create an executable jar:
<configuration>
<executable>true</executable>
</configuration>
After that you can add your application as a service in init.d and make it start when the container starts.
You can find a better explanation here: http://www.baeldung.com/spring-boot-app-as-a-service
Please have a look at the numerous tutorials that exist for Spring Boot and for dockerizing such an application.
Here is one which explains every step that is necessary:
Build Application as Jar File
Create your docker image with Dockerfile
In this Dockerfile you create an environment like you would on a freshly set up Linux server, and you define the software needed to run your application, e.g. Java. Have a look at existing images like anapsix/alpine-java.
Now think of what you need to do to start your app in this environment: java <some-options> -jar location-of-your-jar.jar
Make sure your app is reachable by exposing the Docker port, so that you can see that it runs.
As I said, if these instructions are not helpful for you, please read tutorials on Docker and on dockerizing Spring Boot applications.
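The steps above could be sketched as a minimal Dockerfile (the base image follows the answer's suggestion; the jar name and port are illustrative assumptions):

```dockerfile
# Hypothetical Dockerfile: jar name and port are assumptions for illustration
FROM anapsix/alpine-java:8
COPY target/my-service.jar /app/my-service.jar
# Expose the port your app listens on so you can reach it from outside
EXPOSE 8080
CMD ["java", "-jar", "/app/my-service.jar"]
```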
You should use docker-compose. The best way to manage releases/versions and builds is to run your own repository for dedicated Docker images (Nexus is an example).
In docker-compose you can describe your whole infrastructure: create services and networks, and connect services so they can communicate with each other. I think you should go this way to create a nice development and production build flow for your microservice application.
For Cassandra and other well-known services you can find prebuilt images on https://hub.docker.com.
Each microservice should have its own Dockerfile; then, in the main directory of your solution, you can create a docker-compose.yml file with the service definitions.
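As a sketch of such a docker-compose.yml (the service names, build paths and ports are illustrative assumptions; your four services would each get an entry like these):

```yaml
# Hypothetical compose file: names, paths and ports are illustrative
version: "3"
services:
  service-a:
    build: ./service-a
    ports:
      - "8080:8080"
    depends_on:
      - cassandra
  service-b:
    build: ./service-b
    ports:
      - "8081:8081"
    depends_on:
      - cassandra
  cassandra:
    image: cassandra:3.11
    ports:
      - "9042:9042"
```

docker-compose up then starts everything together. Note that depends_on only controls start order, not readiness, so the services may still need retry logic when first connecting to Cassandra.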
You can also build your microservices inside a Docker container. Read more about this by searching for "Java application build flow with Docker".
All about docker compose you can find here: https://docs.docker.com/compose/
All about docker swarm you can find here: https://docs.docker.com/engine/swarm/
