How to monitor JBoss EAP with Prometheus jmx-exporter

I want to monitor some JBoss EAP 7 servers with Prometheus/Grafana (as well as some Wildfly).
I understand I have to use jmx_exporter.
Should I use it as embedded (agent) or side-car (http)?
Which configuration file?

I was able to scrape the metrics using this config.
Add the configuration below to your startup script or standalone.conf:
JAVA_OPTS="$JAVA_OPTS -Djboss.modules.system.pkgs=org.jboss.byteman,org.jboss.logmanager -Djava.util.logging.manager=org.jboss.logmanager.LogManager -Dorg.jboss.logging.Logger.pluginClass=org.jboss.logging.logmanager.LoggerPluginImpl"
JAVA_OPTS="$JAVA_OPTS -Xbootclasspath/p:$JBOSS_HOME/modules/system/layers/base/org/jboss/logmanager/main/jboss-logmanager-2.0.3.Final-redhat-1.jar"
JAVA_OPTS="$JAVA_OPTS -javaagent:/path/to/exporter/jmx_exporter.jar=10001:/path/to/config/config.yaml"

It's recommended to run jmx-exporter embedded in the JVM (use -javaagent). That's easier, more robust and gives better insights.
The configuration file depends on the version (JBoss EAP 7 uses Undertow, so the MBeans to collect are different from JBoss EAP 6).
The jmx-exporter project provides an example configuration file for WildFly 10: example_configs/wildfly-10.yaml.
However, if you use the "JBoss EAP for OpenShift" container images, the jmx-exporter agent is already embedded in the containers (set the variable AB_PROMETHEUS_ENABLE=true, and sometimes also JAVA_OPTS_APPEND=-Dwildfly.statistics-enabled=true).
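For example, on OpenShift you can set those variables with something like the following (the DeploymentConfig name dc/my-eap-app is just a placeholder):
# hypothetical deployment name; adjust to your DeploymentConfig
oc set env dc/my-eap-app AB_PROMETHEUS_ENABLE=true JAVA_OPTS_APPEND=-Dwildfly.statistics-enabled=true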
If you don't use Red Hat's container images, you can still use the same jmx-exporter configuration files. Those files (jmx-exporter-config.yaml) are open source and available on GitHub:
on the master branch, the one for JBoss EAP 7.3
on older branches, the ones for JBoss EAP 6.4 (and 7.1 and 7.2)
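For completeness, the Prometheus side is then a plain scrape job pointing at the agent's port (job name, target host and interval below are examples):
scrape_configs:
  - job_name: 'jboss-eap'
    scrape_interval: 30s
    static_configs:
      - targets: ['jboss-host-1:10001']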

Related

Installing Jenkins on macOS Catalina with Java 13 already installed

I installed Java 13 earlier and now need to install Java 8 on my Mac. As a newbie, can I have two Java versions on my machine? If yes, how can I install Jenkins, for which Java 8 is a must, or is there a way to install Jenkins with Java 13?
You can have multiple JREs or JDKs installed on your machine (standard back-end practice), but only one can be referenced by your environment variable (usually the latest).
That means that **when you want to run something with Java 8 you will have to call it using the full path instead of just 'java'**.
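On macOS, for instance, you can locate a specific JDK with /usr/libexec/java_home and call its java binary directly:
# list the installed JDKs, then run something explicitly with Java 8
/usr/libexec/java_home -V
export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)
"$JAVA_HOME/bin/java" -version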
I strongly recommend you use Docker and run Jenkins in a container, mapping the Jenkins home folder to a folder on your machine. This will give you full portability and easier upgrades/rollbacks.
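A minimal sketch of that, assuming the official jenkins/jenkins:lts image and a jenkins_home folder in your home directory:
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v "$HOME/jenkins_home:/var/jenkins_home" \
  jenkins/jenkins:lts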
PS: welcome to SO!

Is it possible to reference an SDK ( or any folder ) in a Docker Container from the Host computer?

Short description:
Is it possible to reference an SDK ( or any folder ) in a Docker Container from the Host computer?
Long description:
My team and I work in different environments (Windows & Mac) and on different stacks (ASP.NET MVC / Elixir & Phoenix).
I'm trying to help everyone by creating separate Docker Stacks for each solution ( or group of projects )
What I have been able to do is set up the Docker Stacks so that each solution can be run in 1 or more Docker Containers and the developers can work on the code locally ( using direct host path mounts/volumes ) using an IDE of their choosing.
The issue is different solutions use different SDKs or even different versions of the same SDKs.
So what I would like to do is set it up so that anyone in the team could reference the SDK installed in the Docker container instead of installing the SDKs, and each version of the SDKs they need, for all the projects.
As far as I can tell, if I create a host mount binding, it will overwrite what's in the container with what's on the host, but I'd like to do it the other way round: create a binding between the Docker container and the host and have the contents of the Docker container show up on the host.
Is this possible? Is there a better way to achieve this?
SDK images from vendors (e.g. the ASP.NET Core SDK images from Microsoft) are best suited for compile/build time, and their lightweight runtime counterparts are recommended for the hosting/deployment environment.
The sole purpose of the compile/build SDK images is to create Docker runtime images at build stage, especially if the target runtime OS (Linux) is different from the development machine OS, e.g. Windows. Used efficiently with the multi-stage builder pattern inside a Dockerfile, they can produce much lighter runtime images for hosted environments, as sketched below.
For example, the ASP.NET Core SDK images are used to build the Docker images, which are then run locally with host:guest port mapping. If the dev machine OS is a Linux distro, using SDK images is even better, as you can test and validate against multiple SDK images. These images only need their exact name and the Docker daemon will download them automatically whenever required. It does, however, need good IDE orchestration support, e.g. what Visual Studio provides for Docker-based development on Windows 10; or else simply use the Docker CLI to build and run.
Hope this helps clarify your need, if not provide a solution.
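A minimal multi-stage Dockerfile along those lines could look like this (image tags, project layout and MyApp.dll are assumptions to adapt to your solution):
# Build stage: full SDK image
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app/publish

# Runtime stage: lightweight ASP.NET runtime image
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
COPY --from=build /app/publish .
# MyApp.dll is a placeholder for your published entry assembly
ENTRYPOINT ["dotnet", "MyApp.dll"]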

Jenkins installation on a Solaris server

How do I install Jenkins on a Solaris server? I found articles saying that this cannot be done, as Jenkins has discontinued support for Solaris.
Even though the official IPS repositories for Solaris are discontinued, you can still run Jenkins on Solaris via the Jenkins webapp (jenkins.war). To quote from the Jenkins installation doc:
Solaris, OmniOS, SmartOS, and other siblings
Generally it should suffice to install Java 8 and download the jenkins.war and run it as a standalone process or under an application server such as Apache Tomcat.
Some caveats apply:
Headless JVM and fonts: For OpenJDK builds on minimalized-footprint systems, there may be issues running the headless JVM, because Jenkins needs some fonts to render certain pages.
ZFS-related JVM crashes: When Jenkins runs on a system detected as a SunOS, it tries to load integration for advanced ZFS features using the bundled libzfs.jar which maps calls from Java to native libzfs.so routines provided by the host OS. Unfortunately, that library was made for binary utilities built and bundled by the OS along with it at the same time, and was never intended as a stable interface exposed to consumers. As the forks of Solaris legacy, including ZFS and later the OpenZFS initiative evolved, many different binary function signatures were provided by different host operating systems - and when Jenkins libzfs.jar invoked the wrong signature, the whole JVM process crashed. A solution was proposed and integrated in jenkins.war since weekly release 2.55 (and not yet in any LTS to date) which enables the administrator to configure which function signatures should be used for each function known to have different variants, apply it to their application server initialization options and then run and update the generic jenkins.war without further workarounds. See the libzfs4j Git repository for more details, including a script to try and "lock-pick" the configuration needed for your particular distribution (in particular if your kernel updates bring a new incompatible libzfs.so).
Also note that forks of the OpenZFS initiative may provide ZFS on various BSD, Linux, and macOS distributions. Once Jenkins supports detecting ZFS capabilities, rather than relying on the SunOS check, the above caveats for ZFS integration with Jenkins should be considered.
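For reference, the standalone option from the quote boils down to a single command (the port is an example and Java 8 must be on the PATH):
java -jar jenkins.war --httpPort=8080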

Deploy features.xml in servicemix during jenkins Build

I have my features.xml file in the src/main/resources/features folder. When I build my project through Jenkins, my bundle goes to the Nexus repository after the build. My requirement is that after my bundle goes to Nexus, features.xml should automatically be deployed on ServiceMix as part of the build only; I should not have to open the ServiceMix console to install the feature. Please help.
You may think about using a KAR (KAraf aRchive).
More information can be found here: http://karaf.apache.org/manual/latest-3.0.x/users-guide/kar.html
You can build a KAR (through Jenkins) containing your feature, then use hot deployment.
Apache Karaf also provides a KAR deployer. It means that you can drop a KAR file directly in the deploy folder. Apache Karaf will automatically install KAR files from the deploy folder. You can change the behaviour of the KAR deployer in etc/org.apache.karaf.kar.cfg.
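As a sketch, a Jenkins post-build shell step could simply copy the KAR into that hot-deploy folder (host, user, file name and install path below are assumptions):
# copy the KAR built by Maven into the ServiceMix/Karaf hot-deploy folder
scp target/my-feature-1.0.0.kar smx@servicemix-host:/opt/servicemix/deploy/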
I have also been working on this, and my solution was to turn to automated scripting. I wrote an SSH- and FTP-based program which would stop the SMX, delete the ${karaf.home}/data/cache/ directory, replace the feature file with the new one retrieved via FTP, then restart the Karaf container (a rough sketch follows).
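Something along these lines (host, user and install path are assumptions; adapt to your setup):
# stop the container, clear the bundle cache, drop in the new feature file, restart
ssh smx@servicemix-host '/opt/servicemix/bin/stop'
ssh smx@servicemix-host 'rm -rf /opt/servicemix/data/cache'
scp features.xml smx@servicemix-host:/opt/servicemix/deploy/
ssh smx@servicemix-host '/opt/servicemix/bin/start'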
If you are open to looking into other possibilities:
You can look into Fuse Fabric which can link many smx Containers together and implement version increases and rollbacks. Currently I believe this would also need scripting to accomplish it automatically.
The third option is relatively new and comes in the form of building Docker images and deploying them via OpenShift V3, which was just unveiled at the Red Hat Summit 2015. It's worth noting it's fairly new, but it does pack a very impressive feature set.

Do I need to install Tomcat and MySQL on the Linux server to deploy Grails app?

My Grails app is based on
Gradle with Grails 2.4.4,
Tomcat plugin 7.0.55,
and MySQL plugin (mysql:mysql-connector-java:5.1.29).
Do I need to install Tomcat on the server?
Do I need to install MySQL on the server?
Neither Tomcat nor MySQL is installed in my dev environment (on my PC), but it seems to work.
Container
While all the other answers pointed out that you already need a container (which of course is true), there is also the option to use one of the "standalone" plugins (e.g. https://grails.org/plugin/standalone). This will package your app as a fat jar, where the container and your app are part of one jar that you simply run with java -jar myapp.jar (of course you would integrate that into your regular startup scripts on the server).
This is in general not a bad option, since many WAR-deployed apps don't need any of the full-blown container features anyway, and you can configure everything in place for your workload without having to compromise for all running WARs (or your ops team). On the downside, if there is a security problem etc. in the container, you would have to roll a new jar.
With Grails 3, which uses Spring Boot, this is even the default option, since it is the preferred way of deploying. Spring Boot 1.2 supports Tomcat, Jetty, and Undertow by default.
Database
You can use a MySQL instance from somewhere else. But this is nitpicking, since you really do need a MySQL somewhere (BTW: you really should start using MySQL in your dev environment too, or you will be in for a few surprises once you move your stuff to production).
Also be aware that you can keep using your H2 (see your datasource config) with files. This is an OK option (that saves you from installing a DB server) for small amounts of data, and there are other free database servers like PostgreSQL.
Obviously you have to install MySQL and Tomcat on the server.
During development you run Grails from the console, so you don't need Tomcat, as it will use the embedded Tomcat; but you still need MySQL installed if you want to use MySQL.
In production, however, you create a WAR of your app using the 'grails war' command and deploy this WAR to a web container just like any other WAR, so you need Tomcat, and you will need MySQL installed too.
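In practice that deployment step is just the following (the WAR name and Tomcat path are examples):
# build the WAR with Grails, then copy it into Tomcat's webapps folder
grails war
cp target/myapp-0.1.war "$CATALINA_HOME/webapps/"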
In one word, the answer is 'Yes'.
The fact is, when you are in the development environment, Grails uses an embedded Tomcat server provided by the 'Apache Tomcat plugin', whose version corresponds to your Grails version.
You've not installed MySQL and you claimed 'it seems working'. That's funny! But it's not MySQL that is working without being installed; rather it's an embedded database provided by the 'H2 Database Plugin'.
So, when you deploy your Grails app on Linux or another server, you will certainly need a Tomcat server to handle user requests to that app and a database where your data will be saved.
