Is it possible to access WildFly properties (defined in standalone.xml) through JNDI?
Like:
<system-properties>
<property name="MY_PROPERTY" value="some value"/>
...
</system-properties>
and read it in java:
@Resource(lookup = "java:comp/env/MY_PROPERTY")
private String property;
<system-properties> defines JVM system properties (read with System.getProperty()), not JNDI entries. Define JNDI bindings in the naming subsystem instead:
<subsystem xmlns="urn:jboss:domain:naming:2.0">
<bindings>
...
<simple name="java:/env/MY_PROPERTY" value="some value"/>
</bindings>
</subsystem>
Now you can read it with a JNDI lookup.
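For example, a minimal sketch of reading the binding above (the class is just illustrative; any container-managed component will do):
import javax.annotation.Resource;
import javax.naming.InitialContext;

public class MyBean {

    // container-managed injection (servlet, EJB, CDI bean)
    @Resource(lookup = "java:/env/MY_PROPERTY")
    private String property;

    // or a programmatic lookup
    public String lookupProperty() throws Exception {
        return (String) new InitialContext().lookup("java:/env/MY_PROPERTY");
    }
}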
On the client machine, I am using spring-rabbit-1.6.7.RELEASE.jar and spring-amqp-1.6.7.RELEASE.jar to perform operations on RabbitMQ.
Now I need to monitor metrics such as the number of open channels, the number of rejected messages, etc. from the client machine to the RabbitMQ server.
www.rabbitmq.com/monitoring.html mentions that some client libraries and frameworks provide means of registering metrics collectors or collect metrics out of the box. RabbitMQ Java client and Spring AMQP are two examples.
Please suggest how I can use Spring AMQP to collect these metrics from the client machine to the RabbitMQ server.
Please note I am using org.springframework.amqp.rabbit.connection.CachingConnectionFactory, but it doesn't have any method to set a metrics collector.
We are using XML with the following tags to define the connection factory, queue, binding, etc.:
Rabbit:Queues, Rabbit:queue-arguments, Rabbit:DirectExchange, Rabbit:TopicExchange, Rabbit:binding, Rabbit:Admin, and the ConnectionFactory,
e.g.
<bean id="connectionFactory"
class="org.springframework.amqp.rabbit.connection.CachingConnectionFactory">
<constructor-arg value="#{messagingProperties['mq.hostname']}" />
<property name="virtualHost" value="#{messagingProperties['mq.virtual-host']}" />
<property name="username" value="#{messagingProperties['mq.username']}" />
<property name="password" value="#{messagingProperties['mq.password']}" />
<property name="channelCacheSize" value="25" />
</bean>
1.6.7 is extremely old; you should upgrade to at least 1.7.14; the current version is 2.1.8.
You can set the metricsCollector on the underlying rabbit connection factory.
connectionFactory.getRabbitConnectionFactory().setMetricsCollector(...);
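For example, a sketch with a Micrometer-backed collector (this assumes a recent amqp-client that ships com.rabbitmq.client.impl.MicrometerMetricsCollector and Micrometer on the classpath; the Dropwizard-based StandardMetricsCollector is an alternative):
// registry classes: io.micrometer.core.instrument.MeterRegistry, io.micrometer.core.instrument.simple.SimpleMeterRegistry
MeterRegistry registry = new SimpleMeterRegistry();
connectionFactory.getRabbitConnectionFactory()
        .setMetricsCollector(new MicrometerMetricsCollector(registry));
// channel, connection and message counts are then available from the registry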
Or use the RabbitConnectionFactoryBean to create the underlying connection factory, and then inject it into the CachingConnectionFactory.
<bean id="rcf" class= "...RabbitConnectionFactoryBean">
... set all the properties here
</bean>
<bean id="connectionFactory"
class="org.springframework.amqp.rabbit.connection.CachingConnectionFactory">
<constructor-arg ref="rcf" />
<property name="channelCacheSize" value="25" />
</bean>
I am having trouble getting Camel to work with JNDI. I am deploying Camel inside IBM WebSphere.
Inside WebSphere there is a JNDI data source called "vzw.ds.commerce" that is set up to connect to the database I want to access.
This route below works:
<bean class="org.apache.commons.dbcp.BasicDataSource"
destroy-method="close" id="publishDB">
<property name="driverClassName" value="oracle.jdbc.driver.OracleDriver" />
<property name="url"
value="jdbc:oracle:thin:#//server.com:2051/mbschema" />
<property name="username" value="username" />
<property name="password" value="password" />
</bean>
<bean id="commerceDataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiName" value="vzw.ds.commerce" />
</bean>
<camelContext xmlns="http://camel.apache.org/schema/spring">
<route id="timer-to-console" customId="true">
<from uri="timer://foo?fixedRate=true&period=10s" />
<transform>
<simple>30004</simple>
</transform>
<process ref="createSQL" />
<to uri="jdbc:publishDB" />
<process ref="processSQL" />
<to uri="stream:out" />
</route>
</camelContext>
However, I want to use the JNDI connection and not have the connection information in the route.
When I change the <to uri="jdbc:publishDB" /> line to reference the commerceDataSource bean instead, I get the error:
java.sql.SQLException: invalid arguments in call DSRA0010E: SQL State = null, Error Code = 17,433
The code I posted was actually correct; the problem was with the setup on WebSphere.
Once I changed the setting on WebSphere, the code started working.
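If you need to confirm that the WebSphere binding itself is resolvable, independent of Camel, a plain JNDI lookup run inside the container is a quick sanity check (a sketch using the resource name from the question; the class name is illustrative):
import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class JndiCheck {
    // must run inside the container (e.g. from a servlet or a Camel processor) so WebSphere's naming context is available
    public void check() throws Exception {
        DataSource ds = (DataSource) new InitialContext().lookup("vzw.ds.commerce");
        try (Connection con = ds.getConnection()) {
            System.out.println("JNDI datasource OK: " + con.getMetaData().getURL());
        }
    }
}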
ActiveMQ's admin console, by default, listens on 0.0.0.0:8161.
I know I can change the port from 8161 in the jetty.xml config file.
Is it possible to change the URL from 0.0.0.0?
The answer was pretty obvious. In jetty.xml:
<bean id="Connector" class="org.eclipse.jetty.server.nio.SelectChannelConnector">
<property name="port" value="8161" />
<property name="host" value="HOSTNAME" />
</bean>
I have a Maven 3 project where I'm using the tomcat7-maven-plugin. I would like to set the path for the embedded database via an environment variable passed to the JVM.
Reading the variable with System.getenv("myDataDir") within a Java method returns the correct path.
But when I reference the variable as ${myDataDir} in my persistence.xml and start Tomcat with "mvn tomcat:run", I get FileNotFoundExceptions because the variable is not replaced with the actual value (it says, e.g., Cannot find path for ${myDataDir}\derby.log).
I don't know what's causing this, whether it's the persistence provider (EclipseLink) that doesn't support this or something else.
My persistence.xml looks like this:
<persistence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd" version="2.0" xmlns="http://java.sun.com/xml/ns/persistence">
<persistence-unit name="myPersistence" transaction-type="RESOURCE_LOCAL">
<provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
<properties>
<property name="javax.persistence.jdbc.driver" value="org.apache.derby.jdbc.EmbeddedDriver" />
<property name="javax.persistence.jdbc.url" value="jdbc:derby:${myDataDir}/DB;create=true;upgrade=true" />
<property name="javax.persistence.jdbc.user" value="admin" />
<property name="javax.persistence.jdbc.password" value="password" />
<property name="eclipselink.ddl-generation" value="create-tables" />
<property name="eclipselink.ddl-generation.output-mode" value="both" />
<property name="eclipselink.logging.level" value="SEVERE" />
<property name="eclipselink.logging.file" value="${myDataDir}/derby.log" />
<property name="eclipselink.application-location" value="${myDataDir}/dbScripts" />
</properties>
</persistence-unit>
</persistence>
EDIT:
I forgot to mention that I'm in a Spring 3 Framework environment. According to the examples on the web, it should be possible to use environment variables in persistence.xml...
Using ${myDataDir} in a database URL is not valid unless your database supports it, or unless you substitute the variable yourself during your build.
What you can do is pass a properties map to Persistence.createEntityManagerFactory() that contains the URL you want, built at runtime in code.
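A minimal sketch, using the persistence unit and property names from the persistence.xml above and reading the directory from the environment at runtime (the class name is just illustrative):
import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class EmfBuilder {
    public static EntityManagerFactory create() {
        // build the paths in code instead of relying on ${myDataDir} substitution in persistence.xml
        String dataDir = System.getenv("myDataDir");
        Map<String, String> props = new HashMap<>();
        props.put("javax.persistence.jdbc.url",
                "jdbc:derby:" + dataDir + "/DB;create=true;upgrade=true");
        props.put("eclipselink.logging.file", dataDir + "/derby.log");
        props.put("eclipselink.application-location", dataDir + "/dbScripts");
        return Persistence.createEntityManagerFactory("myPersistence", props);
    }
}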
I need suggestions for creating an Ant build for multiple servers. I have about 25 servers. If possible, I would like to deploy the WAR file to all the servers by running Ant once. I have the following issues to consider:
The configuration parameters are not the same for all servers.
For some configuration parameters I have to set the host IP of the server on which the app is deployed. With 25 servers, I want some suggestions on how to deal with this.
You could hand-code the logic to do this in Ant, but it might be a lot of work depending on how different your server configurations are. Instead, I'd recommend looking at a proper configuration management tool such as Chef or Puppet to automate your deployments, and just use Ant to build the files that are deployed.
I had the same objectives.
You can either code a Maven script to set up continuous integration on Jenkins, as mentioned by Jayan,
or you can create an Ant script as you mentioned.
<!-- Define custom properties -->
<property name="build.dir" location="${basedir}/target" />
<property name="host.dev" value="YOUR IP" />
<property name="host.live" value="YOUR IP 2" />
<property name="ssh.timeout" value="60000" />
<property name="username.dev" value="username" />
<property name="username.live" value="username 2" />
<property name="password.dev" value="password" />
<property name="password.live" value="password 2" />
Create your own ssh macrodef task in order to use SSH commands:
<!-- Define ssh commands sshexec -->
<macrodef name="ssh_cmd">
<attribute name="host" />
<attribute name="command" />
<attribute name="usepty" default="false" />
<attribute name="username" />
<attribute name="password" />
<sequential>
<echo>Executing command : #{command} on #{username}@#{host}</echo>
<sshexec host="#{host}" failonerror="true" username="#{username}" password="#{password}" timeout="${ssh.timeout}" command="#{command}" usepty="#{usepty}" trust="true" />
</sequential>
</macrodef>
Send a command to your server like:
<ssh_cmd host="${host.dev}" command="YOUR COMMAND (ex: sudo start yourservice onlinux)" username="${username.dev}" password="${password.dev}"/>
Don't forget to import the sshexec/scp Ant tasks with something like:
<property environment="env" />
<taskdef resource="net/jtools/classloadertask/antlib.xml">
<classpath>
<fileset dir="${ant.home}/lib" includes="ant-classloader*.jar" />
</classpath>
</taskdef>