Expose metrics from Kotlin to Prometheus using jmx_exporter - Docker

I am trying to use jmx_exporter with my Kotlin code to expose metrics to Prometheus, in order to show them in Grafana. I have gone through many articles trying to understand how it could be done, and found the two links below useful:
https://github.com/prometheus/jmx_exporter
https://www.openlogic.com/blog/monitoring-java-applications-prometheus-and-grafana-part-1
What I have done so far: I created a folder 'prometheus-jmx' in the root directory and added the mentioned JAR and config.yml file to that folder. Then I added the following command to my Dockerfile:
CMD java -Xms128m -Xmx256m -javaagent:./jmx_prometheus_javaagent-0.12.0.jar=8080:./config.yml -Dconfig.file=config/routing.conf -cp jagathe-jar-with-dependencies.jar:./* com.bmw.otd.agathe.AppKt
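For reference, the agent's config.yml can be a minimal catch-all. A sketch (lowercaseOutputName is optional):

# Minimal jmx_exporter config: export every MBean attribute with default naming
lowercaseOutputName: true
rules:
- pattern: ".*"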
Prometheus is running in my OpenShift cluster along with my application. I can scrape metrics for my other applications/deployments, like Jenkins and SonarQube, without any modifications to Prometheus's deployment.yml.
My application now runs properly on OpenShift, and from the application's pod I can scrape metrics using the command below.
curl http://localhost:portNumber
But in the Prometheus UI, I can't see any JVM or JMX related metrics.
Can someone please tell me where and what I am doing wrong? Any kind of help would be appreciated.

After trying many things, I came to know that I needed to expose my application container's port so that Prometheus and other deployments could see it. After exposing the port, I could see my application under Targets in Prometheus and could scrape all the JMX and JVM metrics. Hope this helps someone in the future...
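For anyone who hits the same wall: in my case "exposing the port" meant declaring the agent's port on the container and putting a Service in front of it. A sketch, assuming the agent listens on 8080 as in my Dockerfile (the names are placeholders):

# deployment fragment: declare the metrics port on the container
ports:
- containerPort: 8080
  name: metrics

# service fragment so Prometheus can discover and reach the pod
apiVersion: v1
kind: Service
metadata:
  name: my-app-metrics
  labels:
    app: my-app
spec:
  selector:
    app: my-app
  ports:
  - name: metrics
    port: 8080
    targetPort: 8080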

Related

How to enable broker security in confluent-kafka?

I am trying to enable security in Kafka.
I tried it with Apache Kafka and it worked fine, but now we are using the Confluent Platform Docker image to get all the Confluent services.
Here I don't know how to enable Kafka SSL security.
I checked etc/kafka/ in the broker container, but I don't know in which file we need to change the properties, because there are two files: 1) kafka.properties and 2) server.properties.
I am quite confused; can anyone share a suggestion on this?
Which Docker image do you use? Please make sure that you are pulling it from hub.docker.com/r/confluentinc/cp-kafka
Usually, the configuration file for Apache Kafka brokers is server.properties. You do not need to inject the whole config file, though; you can configure the broker with environment variables passed to the container. Please see cp-demo/blob/6.1.0-post/docker-compose.yml as an example.
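For SSL specifically, that compose file shows the pattern. A sketch of the broker service (the keystore/truststore filenames and *_creds files are placeholders for whatever you mount into /etc/kafka/secrets):

kafka:
  image: confluentinc/cp-kafka
  volumes:
    - ./secrets:/etc/kafka/secrets   # keystores, truststores and credential files
  environment:
    KAFKA_ADVERTISED_LISTENERS: SSL://kafka:9093
    KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SSL
    KAFKA_SSL_KEYSTORE_FILENAME: kafka.broker.keystore.jks
    KAFKA_SSL_KEYSTORE_CREDENTIALS: broker_keystore_creds
    KAFKA_SSL_KEY_CREDENTIALS: broker_sslkey_creds
    KAFKA_SSL_TRUSTSTORE_FILENAME: kafka.broker.truststore.jks
    KAFKA_SSL_TRUSTSTORE_CREDENTIALS: broker_truststore_creds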

How do I implement Prometheus monitoring in OpenShift projects?

We have an OpenShift Container Platform URL that contains multiple projects, like
project1
project2
project3
Each project contains several pods that we are currently monitoring with New Relic, like
pod1
pod2
pod3
We are trying to implement Prometheus + Grafana for all these projects separately.
The online articles are confusing, as none of them describe a configuration like the one we have now.
Where do we start?
What do we add to the Docker images?
Is there any procedure to monitor the containers using cAdvisor on OpenShift?
Some say we need to add a Maven dependency to the project. Some say we need to modify the code. Some say we need to add Prometheus annotations to the Docker containers. Some say to add node-exporter. What is node-exporter in the first place? Is it another container that looks for container metrics? Can I install it as part of my Docker images? Can anyone point me to an article or something with a similar configuration?
Your question is pretty broad, so the answer will be the same :)
Just to clarify, in your question you say:
implement Prometheus + Grafana for all these projects separately
Are you going to have a dedicated installation of Kubernetes and Prometheus + Grafana for each project, or are you going to have one cluster for all of them?
In general, I think, the answer should be:
Use Prometheus Operator as recommended (https://github.com/coreos/prometheus-operator)
Once the operator is installed, you'll be able to get most of your data just by config changes; for example, you get Grafana and the node exporters into the cluster through single config changes.
In our case (we are not running OpenShift, but a vanilla k8s cluster), we run multiple namespaces (like your projects), each of which has its representation in Prometheus.
To be able to monitor a pod's application metrics, you need to use a Prometheus client for your language and tell Prometheus to scrape the metrics (usually this is done with ServiceMonitors).
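A minimal ServiceMonitor looks roughly like this (the names and the port label are assumptions; the selector must match your Service's labels):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  namespace: project1
spec:
  selector:
    matchLabels:
      app: my-app
  endpoints:
  - port: metrics      # the named port on the Service that serves /metrics
    interval: 30s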
Hope this will shed some light.

Docker container with ELK stack to browse nginx and Tomcat log files

I am trying to debug a production failure involving (multiple) nginx and Tomcat logs. I have copied the logs to my dev machine. What is the easiest way for me to import these logs into an Elastic/ELK stack to sift through them quickly? (Currently, I'm making do with less in multiple terminal windows.)
So far I've found only generic Docker containers (like https://elk-docker.readthedocs.io/) that require me to install Filebeat and configure it. However, since my data is static, I would prefer a simpler installation.
What I did earlier is create the ELK stack with docker-compose and ingest the data via 'nc' (netcat). An example can be found at: https://github.com/deviantony/docker-elk
You might want to adjust the Logstash config so that it reads and parses your data correctly. If the number of files is not too big, you can nc them one by one; otherwise you can write a small script around it, in bash for example, to loop through the files.
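For example, something like this (the TCP input port is an assumption; check logstash/pipeline/logstash.conf in your docker-elk checkout):

# Feed every copied log file into Logstash's TCP input one by one;
# -q 1 closes the connection after EOF (flag availability depends on your netcat variant)
for f in nginx/*.log tomcat/*.log; do
  nc -q 1 localhost 5000 < "$f"
done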

How to run microservices using Docker

I am a newbie to Spring Boot. I need to create microservices and run them with Docker. I have attached my project structure here. The problem is that every time I need to bring the microservices up manually. For example, I have 4 microservices and I just start these services manually. But all the microservices should start by themselves when deployed to Docker. How can I achieve this?
I am also using a Cassandra database.
I don't know if it is the best solution, but it is the one I used:
First, tell the Spring Boot Maven plugin to create an executable jar:
<configuration>
    <executable>true</executable>
</configuration>
After that you can add your application as a service in init.d and make it start when the container starts.
You can find a better explanation here: http://www.baeldung.com/spring-boot-app-as-a-service
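The init.d part of that article boils down to a symlink, since a fully executable Spring Boot jar can act as its own init script. A sketch, assuming the jar lands at /opt/app/myapp.jar (a hypothetical path):

# inside the container: register the executable jar as an init.d service
ln -s /opt/app/myapp.jar /etc/init.d/myapp
service myapp start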
Please have a look at the numerous tutorials that exist for Spring Boot and for dockerizing such an application.
Here is one which explains every step that is necessary:
Build the application as a jar file.
Create your Docker image with a Dockerfile.
In this Dockerfile you create an environment as if you were setting up a new Linux server, and you define the software you need to run your application, like Java. Have a look at existing images like anapsix/alpine-java.
Now think of what you need to do to start your app in this environment: java --some-options -jar location-of-your-jar.jar
Make sure you can reach your app by exposing the Docker port, so that you can see that it runs. A minimal Dockerfile along these lines follows this list.
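Putting those steps together, something like this (the jar name and port are assumptions):

FROM anapsix/alpine-java
# copy the jar built in the first step into the image
COPY target/myapp.jar /opt/app/myapp.jar
# expose the port the app listens on, so you can see that it runs
EXPOSE 8080
# the command that starts the app in this environment
CMD ["java", "-jar", "/opt/app/myapp.jar"]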
As I said, if these instructions are not helpful for you, then please read tutorials about Docker and about dockerizing Spring Boot applications.
You should use docker-compose. The best way to manage releases/versions and builds is to have your own repository for dedicated Docker images (Nexus is an example).
In docker-compose you can describe all your infrastructure: create services and networks, and connect services so they can communicate with each other. I think you should go this way to create a nice development and production build flow for your microservice application.
For Cassandra and other well-known services you can find preferred images on https://hub.docker.com.
Each microservice should have its own Dockerfile; then, in the main directory of your solution, you can create a docker-compose.yml file with the service definitions.
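A minimal sketch (the service names, paths and versions are assumptions):

version: "3"
services:
  service-a:
    build: ./service-a        # each microservice has its own Dockerfile
    depends_on:
      - cassandra
  service-b:
    build: ./service-b
    depends_on:
      - cassandra
  cassandra:
    image: cassandra:3.11     # official image from hub.docker.com

With this in place, a single docker-compose up -d brings up all the services together instead of you starting each one manually.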
You can build your microservices inside a Docker container too; search for "Java application build flow with Docker" to read more.
All about docker compose you can find here: https://docs.docker.com/compose/
All about docker swarm you can find here: https://docs.docker.com/engine/swarm/

Neo4j server failed to start on OpenShift

I want to create a social network with the Django framework on OpenShift, so I need at least a graph db (like Neo4j) and a relational db (like MySQL). I had trouble adding Neo4j to my project because OpenShift has no cartridge for it, so I decided to install it with DIY, but I don't understand the functionality of the start and stop files in .openshift/action_hooks. I took the following steps to install Neo4j on the server:
1. ssh to my account:
ssh 1238716...@something-prolife.rhcloud.com
2. Go to a folder where I have permission to write (I went to app-root/repo/ and did mkdir test in it), download the Neo4j package from here, and extract it to the test folder I created before:
tar -xvzf neo4j-community-1.9.4-unix.tar.gz
3. Finally, run the neo4j script and start it:
neo4j-community-1.9.4/bin/neo4j start
But I see these logs and can't run Neo4j:
process [3898]... waiting for server to be ready............ Failed to start within 120 seconds.
Neo4j Server may have failed to start, please check the logs.
How can I run this database on OpenShift? Where am I wrong? And where are the logs mentioned in "please check the logs"?
I've developed an OpenShift cartridge that fixes the permission issue on OpenShift. I had to change the classes HostBoundSocketFactory and SimpleAppServer in Neo4j to bind not to port 0 but to an OpenShift-available port.
You can check at: https://github.com/danielnatali/openshift-neo4j-cartridge
It's working for me.
I would also not place it in app-root/repo; instead I would put it in app-root/data.
You also need to use the IP of the gear; I think the env variable is something like OPENSHIFT_INTERNAL_IP. 127.0.0.1 is not available for binding, but I think the ports should be open.
There are 2 ways Neo4j can run: embedded or standalone (exposed via a REST service).
Standalone is what you are trying to do. I think the right way to set up Neo4j would be to write a cartridge for OpenShift and then add the cartridge to your gear. There has been some discussion about this, but it seems that nobody has taken the time to do it. Check https://www.openshift.com/forums/openshift/neo4j-cartridge. If you decide to write your own cartridge, I might assist. Here are the docs: https://www.openshift.com/developers/download-cartridges.
The other option is running in embedded mode, which I have used. You need to set up a Java EE application (because the Neo4j embedded-mode libraries are only available for JVM languages) and put the Neo4j libraries in your project. Then you expose some routes, check for parameters, and run your Neo4j queries inside the servlets.
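To make the embedded option concrete, here is a minimal sketch in Kotlin (any JVM language works), using the 1.9-era API; the store path is an assumption and should be a writable location on the gear, such as app-root/data:

import org.neo4j.graphdb.factory.GraphDatabaseFactory

fun main() {
    // open (or create) the embedded store; app-root/data is writable on OpenShift
    val db = GraphDatabaseFactory().newEmbeddedDatabase("app-root/data/graph.db")
    val tx = db.beginTx()
    try {
        val node = db.createNode()
        node.setProperty("name", "example")
        tx.success()              // mark the transaction as successful
    } finally {
        tx.finish()               // 1.9-era API; later versions use close()
    }
    db.shutdown()
}

In a real servlet you would open the database once per application, not per request, and run your queries inside the request handlers.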
