MBean registered but not found in MBeanServer - mbeans

I have a problem with MBeans. I have created a simple MBean and registered it on the default MBeanServer that is running (via Eclipse or java -jar mbean.jar). In the same process, if I try to find the registered MBean with a simple query:
for (ObjectInstance instance : mbs.queryMBeans(ObjectNameMbean, null)) {
System.out.println(instance.toString());
}
the query returns my MBean. But if I start another process and try to search for this registered MBean, it is not found! Why?
The approach is (process that is running):
public static void main(String[] args) throws Exception
{
MBeanServer mbeanServer =ManagementFactory.getPlatformMBeanServer();
ObjectName objectName = new ObjectName(ObjectNameMbean);
Simple simple = new Simple (1, 0);
mbeanServer.registerMBean(simple, objectName);
while (true)
{
// wait (is this necessary?)
}
}
So this is the first process that is running (its only purpose is to register the MBean, because another process wants to read this information).
Then I start another process to search for this MBean, but nothing is found.
I'm not using JBoss but the local Java Virtual Machine; my goal is to deploy this simple application in one EJB (autostart) while another EJB reads all the information.
All suggestions are really appreciated.
This example should be more useful:
Object Hello:
public class Hello implements HelloMBean {
public void sayHello() {
System.out.println("hello, world");
}
public int add(int x, int y) {
return x + y;
}
public String getName() {
return this.name;
}
public int getCacheSize() {
return this.cacheSize;
}
public synchronized void setCacheSize(int size) {
this.cacheSize = size;
System.out.println("Cache size now " + this.cacheSize);
}
private final String name = "Reginald";
private int cacheSize = DEFAULT_CACHE_SIZE;
private static final int DEFAULT_CACHE_SIZE = 200;
}
Interface HelloMBean (implemented by Hello):
public interface HelloMBean {
public void sayHello();
public int add(int x, int y);
public String getName();
public int getCacheSize();
public void setCacheSize(int size);
}
Simple Main
import java.lang.management.ManagementFactory;
import java.util.logging.Logger;
import javax.management.MBeanServer;
import javax.management.ObjectName;
public class Main {
static Logger aLog = Logger.getLogger("MBeanTest");
public static void main(String[] args) {
try{
MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
ObjectName name = new ObjectName("ApplicationDomain:type=Hello");
Hello mbean = new Hello();
mbs.registerMBean(mbean, name);
// System.out.println(mbs.getAttribute(name, "Name"));
aLog.info("Waiting forever...");
Thread.sleep(Long.MAX_VALUE);
}
catch(Exception x){
x.printStackTrace();
aLog.info("exception");
}
}
}
So now I have exported this project as a jar file and run it as "java -jar helloBean.jar", and from Eclipse I have modified the main class to read information from this MBean (for example the "Name" attribute), using the same ObjectName that was used to register it.
Main to read:
public static void main(String[] args) {
try{
MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
ObjectName name = new ObjectName("ApplicationDomain:type=Hello");
System.out.println(mbs.getAttribute(name, "Name"));
}
catch(Exception x){
x.printStackTrace();
aLog.info("exception");
}
}
But nothing, the bean is not found.
Project link : here!
Any idea?

I suspect the issue here is that you have multiple MBeanServer instances. You did not mention how you acquired the MBeanServer in each case, but in your second code sample, you are creating a new MBeanServer instance which may not be the same instance that other threads are reading from. (I assume this is all in one JVM...)
If you are using the platform agent, I recommend you acquire the MBeanServer using the ManagementFactory as follows:
MBeanServer mbs = java.lang.management.ManagementFactory.getPlatformMBeanServer() ;
That way, you will always get the same MBeanServer instance.
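If the reader really is a separate JVM process (as in the java -jar scenario above), keep in mind that the platform MBeanServer is per-JVM, so the second process can never see the first process's MBeans by calling getPlatformMBeanServer() itself; it has to connect to the first JVM over JMX remoting. Below is a minimal sketch of such a reader, assuming the registering JVM was started with the standard com.sun.management.jmxremote.port=9999 system property (and authentication/SSL disabled for the test); the port and URL are assumptions, not taken from the question.
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
public class RemoteReader {
    public static void main(String[] args) throws Exception {
        // Address of the JVM that registered the MBean (port 9999 is an assumption for this sketch)
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url, null)) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            // Same ObjectName the registering process used
            ObjectName name = new ObjectName("ApplicationDomain:type=Hello");
            System.out.println(connection.getAttribute(name, "Name"));
        }
    }
}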

Related

Dependency Injection in Apache Storm topology

A little background: I am working on a topology using Apache Storm, and I thought, why not use dependency injection in it? But I was not sure how it would behave in a cluster environment once the topology is deployed. I started looking for answers on whether DI is a good option to use in Storm topologies; I came across some threads about Apache Spark where it was mentioned that serialization is going to be a problem, and saw some responses for Apache Storm along the same lines. So finally I decided to write a sample topology with Google Guice to see what happens.
I wrote a sample topology with two bolts and used Google Guice to inject dependencies. The first bolt is triggered by a tick tuple, creates a message, prints the message to the log, and calls some classes which do the same. The message is then emitted to the second bolt, which applies the same printing logic.
First Bolt
public class FirstBolt extends BaseRichBolt {
private OutputCollector collector;
private static int count = 0;
private FirstInjectClass firstInjectClass;
@Override
public void prepare(Map map, TopologyContext topologyContext, OutputCollector outputCollector) {
collector = outputCollector;
Injector injector = Guice.createInjector(new Module());
firstInjectClass = injector.getInstance(FirstInjectClass.class);
}
@Override
public void execute(Tuple tuple) {
count++;
String message = "Message count "+count;
firstInjectClass.printMessage(message);
log.error(message);
collector.emit("TO_SECOND_BOLT", new Values(message));
collector.ack(tuple);
}
@Override
public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
outputFieldsDeclarer.declareStream("TO_SECOND_BOLT", new Fields("MESSAGE"));
}
@Override
public Map<String, Object> getComponentConfiguration() {
Config conf = new Config();
conf.put(Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS, 10);
return conf;
}
}
Second Bolt
public class SecondBolt extends BaseRichBolt {
private OutputCollector collector;
private SecondInjectClass secondInjectClass;
@Override
public void prepare(Map map, TopologyContext topologyContext, OutputCollector outputCollector) {
collector = outputCollector;
Injector injector = Guice.createInjector(new Module());
secondInjectClass = injector.getInstance(SecondInjectClass.class);
}
@Override
public void execute(Tuple tuple) {
String message = (String) tuple.getValue(0);
secondInjectClass.printMessage(message);
log.error("SecondBolt {}",message);
collector.ack(tuple);
}
@Override
public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
}
}
Class in which dependencies are injected
public class FirstInjectClass {
FirstInterface firstInterface;
private final String prepend = "FirstInjectClass";
@Inject
public FirstInjectClass(FirstInterface firstInterface) {
this.firstInterface = firstInterface;
}
public void printMessage(String message){
log.error("{} {}", prepend, message);
firstInterface.printMethod(message);
}
}
Interface used for binding
public interface FirstInterface {
void printMethod(String message);
}
Implementation of interface
public class FirstInterfaceImpl implements FirstInterface{
private final String prepend = "FirstInterfaceImpl";
public void printMethod(String message){
log.error("{} {}", prepend, message);
}
}
Same way another class that receives dependency via DI
public class SecondInjectClass {
SecondInterface secondInterface;
private final String prepend = "SecondInjectClass";
@Inject
public SecondInjectClass(SecondInterface secondInterface) {
this.secondInterface = secondInterface;
}
public void printMessage(String message){
log.error("{} {}", prepend, message);
secondInterface.printMethod(message);
}
}
another interface for binding
public interface SecondInterface {
void printMethod(String message);
}
implementation of second interface
public class SecondInterfaceImpl implements SecondInterface{
private final String prepend = "SecondInterfaceImpl";
public void printMethod(String message){
log.error("{} {}", prepend, message);
}
}
Module Class
public class Module extends AbstractModule {
@Override
protected void configure() {
bind(FirstInterface.class).to(FirstInterfaceImpl.class);
bind(SecondInterface.class).to(SecondInterfaceImpl.class);
}
}
Nothing fancy here, just two bolts and a couple of classes for DI. I deployed it on a server and it works just fine. The catch, though, is that I have to initialize the Injector in each bolt, which makes me question what the side effects of that are going to be.
This implementation is simple, just two bolts, but what if I have more bolts? What impact would it have on the topology if I have to initialize the Injector in all bolts?
If I try to initialize the Injector outside the prepare method, I get a serialization error.
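One workaround worth considering (a sketch only, not a definitive recommendation) is to keep the Injector out of the bolts' serialized state entirely and build it lazily in a small static holder, so every bolt instance on a worker JVM reuses the same Injector created from prepare(). The InjectorHolder class below is hypothetical; it reuses the Module class from the question.
import com.google.inject.Guice;
import com.google.inject.Injector;
public final class InjectorHolder {
    // One Injector per worker JVM, created lazily so nothing Guice-related is serialized with the topology
    private static volatile Injector injector;
    private InjectorHolder() {
    }
    public static Injector get() {
        if (injector == null) {
            synchronized (InjectorHolder.class) {
                if (injector == null) {
                    injector = Guice.createInjector(new Module());
                }
            }
        }
        return injector;
    }
}
Each bolt's prepare() would then call firstInjectClass = InjectorHolder.get().getInstance(FirstInjectClass.class);, so adding more bolts does not multiply the number of Injectors per worker, while the serialization constraint (the Injector is only created after deserialization, inside prepare()) is still respected.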

Wrong Kafka topic names for Spring-Cloud-Function app deployed as part of Spring-Cloud-Data-Flow stream

I have a simple SCDF stream that looks like this:
http --port=12346 | mvmn-transform | file --name=tmp.txt --directory=/tmp
The mvmn-transform is a simple custom transformer that looks like this:
@SpringBootApplication
@EnableBinding(Processor.class)
@EnableConfigurationProperties(ScdfTestTransformerProperties.class)
@Configuration
public class ScdfTestTransformer {
public static void main(String args[]) {
SpringApplication.run(ScdfTestTransformer.class, args);
}
@Autowired
protected ScdfTestTransformerProperties config;
@Transformer(inputChannel = Processor.INPUT, outputChannel = Processor.OUTPUT)
public Object transform(Message<?> message) {
Object payload = message.getPayload();
Map<String, Object> result = new HashMap<>();
Map<String, String> headersStr = new HashMap<>();
message.getHeaders().forEach((k, v) -> headersStr.put(k, v != null ? v.toString() : null));
result.put("headers", headersStr);
result.put("payload", payload);
result.put("configProp", config.getSomeConfigProp());
return result;
}
// See https://stackoverflow.com/questions/59155689/could-not-decode-json-type-for-key-file-name-in-a-spring-cloud-data-flow-stream
#Bean("kafkaBinderHeaderMapper")
public KafkaHeaderMapper kafkaBinderHeaderMapper() {
BinderHeaderMapper mapper = new BinderHeaderMapper();
mapper.setEncodeStrings(true);
return mapper;
}
}
This works fine.
But I've read that Spring Cloud Function should allow me to implement such apps without a necessity to specify binding and transformer annotations, so I've changed it to this:
@SpringBootApplication
// @EnableBinding(Processor.class)
@EnableConfigurationProperties(ScdfTestTransformerProperties.class)
@Configuration
public class ScdfTestTransformer {
public static void main(String args[]) {
SpringApplication.run(ScdfTestTransformer.class, args);
}
@Autowired
protected ScdfTestTransformerProperties config;
// @Transformer(inputChannel = Processor.INPUT, outputChannel = Processor.OUTPUT)
@Bean
public Function<Message<?>, Map<String, Object>> transform(
// Message<?> message
) {
return message -> {
Object payload = message.getPayload();
Map<String, Object> result = new HashMap<>();
Map<String, String> headersStr = new HashMap<>();
message.getHeaders().forEach((k, v) -> headersStr.put(k, v != null ? v.toString() : null));
result.put("headers", headersStr);
result.put("payload", payload);
result.put("configProp", "Config prop val: " + config.getSomeConfigProp());
return result;
};
}
// See https://stackoverflow.com/questions/59155689/could-not-decode-json-type-for-key-file-name-in-a-spring-cloud-data-flow-stream
#Bean("kafkaBinderHeaderMapper")
public KafkaHeaderMapper kafkaBinderHeaderMapper() {
BinderHeaderMapper mapper = new BinderHeaderMapper();
mapper.setEncodeStrings(true);
return mapper;
}
}
And now I have a problem - SCDF source and target topic names are ignored by Spring-Cloud-Function apparently, and topics transform-in-0 and transform-out-0 are created instead.
SCDF creates topics with names like <stream-name>.<app-name>, e.g. something like TestStream123.http and TestStream123.mvmn-transform.
Previously they were used for transformer - as it should be, since it is a part of the SCDF stream. But now they are ignored by Spring-Cloud-Function and transform-in-0 and transform-out-0 are created instead.
Thus my transformer no longer receives any input, as it expects it on a wrong Kafka topic. And would probably produce no output to the stream as well, since it outputs to the wrong Kafka topic also.
P.S. Just in case, full project code on GitHub: https://github.com/mvmn/scdftest-transformer/tree/scfunc
In order to run locally, start up Kafka, Skipper, SCDF and the SCDF console, do mvn clean install in the app folder and then do app register --name mvmn-transform-1 --type processor --uri maven://x.mvmn.study.scdf.scdftest:scdftest-transformer:0.1.1-SNAPSHOT --metadata-uri maven://x.mvmn.study.scdf.scdftest:scdftest-transformer:0.1.1-SNAPSHOT in the console. Then you can deploy the stream using the definition http --port=12346 | mvmn-transform | file --name=tmp.txt --directory=/tmp
Since you are using the functional model of writing Spring Cloud Stream applications, when you deploy this app, you need to pass two properties on the custom processor to restore the Spring Cloud Data Flow behavior.
spring.cloud.stream.function.bindings.transform-in-0=input
spring.cloud.stream.function.bindings.transform-out-0=output
Can you try that and see if that makes a difference?
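For example (a sketch; the app.mvmn-transform. prefix assumes that is the label under which the processor appears in the stream definition), the two bindings could be passed as deployment properties when deploying the stream:
app.mvmn-transform.spring.cloud.stream.function.bindings.transform-in-0=input
app.mvmn-transform.spring.cloud.stream.function.bindings.transform-out-0=output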

Verify Backing Bean values using Arquillian Warp

My goal is to test, using Arquillian Warp:
1. Already navigated to a JSF page in a previous test.
2. In another test, set a text field to a value; using Warp I need to inject the ViewScoped bean and verify the value in the backing bean.
Sample Code
@RunWith(Arquillian.class)
@WarpTest
@RunAsClient
public class TestIT {
private static final String WEBAPP_SRC = "src/main/webapp";
private static final String WEB_INF_SRC = "src/main/webapp/WEB-INF";
private static final String WEB_RESOURCES = "src/main/webapp/resources";
@Deployment(testable = true)
public static WebArchive createDeployment() {
File[] files = Maven.resolver().loadPomFromFile("pom.xml")
.importRuntimeDependencies().resolve().withTransitivity().asFile();
WebArchive war = ShrinkWrap.create(WebArchive.class, "test.war")
.addPackages(true, "com.mobitill")
.addAsWebInfResource(EmptyAsset.INSTANCE, "beans.xml")
.addAsWebInfResource(new File(WEB_INF_SRC, "template.xhtml"))
.addAsWebInfResource(new File(WEB_INF_SRC, "jboss-web.xml"))
.addAsWebInfResource(new File(WEB_INF_SRC, "web.xml"))
.addAsWebResource(new File(WEBAPP_SRC, "index.xhtml"))
.addAsWebResource(new File("src/main/webapp/demo", "home.xhtml"), "demo/home.xhtml")
.addAsResource("test-persistence.xml", "META-INF/persistence.xml")
.merge(ShrinkWrap.create(GenericArchive.class).as(ExplodedImporter.class)
.importDirectory(WEB_RESOURCES).as(GenericArchive.class), "resources")
.addAsLibraries(files);
System.out.println(war.toString(true));
return war;
}
@Drone
private WebDriver browser;
@ArquillianResource
private URL deploymentUrl;
@Test
@InSequence(1)
public final void browserTest() throws Exception {
browser.get(deploymentUrl.toExternalForm() + "index");
guardHttp(loginImage).click();
Assert.assertEquals("navigate to home page ", "https://127.0.0.1:8080/citi/demo/home", browser.getCurrentUrl());
}
@Test
@InSequence(2)
public final void homeManagedBean() throws Exception {
Warp
.initiate(new Activity() {
@Override
public void perform() {
WebElement txtMerchantEmailAddress = browser.findElement(By.id("txtMerchantEmailAddress"));
txtMerchantEmailAddress.sendKeys("demouser@yahoo.com");
guardAjax(btnMerchantSave).click();
}
})
.observe(request().header().containsHeader("faces-request"))
.inspect(new Inspection() {
private static final long serialVersionUID = 1L;
@Inject
HomeManagedBean hmb;
@ArquillianResource
FacesContext facesContext;
@BeforePhase(UPDATE_MODEL_VALUES)
public void initial_state_havent_changed_yet() {
Assert.assertEquals("email value ", "demouser#yahoo.com", hmb.getMerchantEmail());
}
@AfterPhase(UPDATE_MODEL_VALUES)
public void changed_input_value_has_been_applied() {
Assert.assertEquals(" email value ", "demouser#yahoo.com", hmb.getMerchantEmail());
}
});
}
}
The error I keep getting is:
org.jboss.arquillian.warp.impl.client.execution.WarpSynchronizationException: The Warp failed to observe requests or match them with response.
There were no requests matched by observer [containsHeader('faces-request')]
If Warp enriched a wrong request, use observe(...) method to select appropriate request which should be enriched instead.
Otherwise check the server-side log and enable Arquillian debugging mode on both, test and server VM by passing -Darquillian.debug=true.
at org.jboss.arquillian.warp.impl.client.execution.SynchronizationPoint.awaitResponses(SynchronizationPoint.java:155)
at org.jboss.arquillian.warp.impl.client.execution.DefaultExecutionSynchronizer.waitForResponse(DefaultExecutionSynchronizer.java:60)
at org.jboss.arquillian.warp.impl.client.execution.WarpExecutionObserver.awaitResponse(WarpExecutionObserver.java:64)
Any help will be welcome, or an alternative way of validating a JSF ViewScoped bean during integration testing.
I was able to sort out why it was not working and created a sample project for future reference, in case anyone comes across the same problem:
Testing using Arquillian Warp example

How can I convert an Object to JSON in a Rabbit reply?

I have two applications communicating with each other using rabbit.
I need to send (from app1) an object to a listener (in app2), and after some processing (on the listener) it answers me with another object. Currently I am receiving this error:
ClassNotFound
I am using this config for rabbit in both applications:
@Configuration
public class RabbitConfiguration {
public final static String EXCHANGE_NAME = "paymentExchange";
public final static String EVENT_ROUTING_KEY = "eventRoute";
public final static String PAYEMNT_ROUTING_KEY = "paymentRoute";
public final static String QUEUE_EVENT = EXCHANGE_NAME + "." + "event";
public final static String QUEUE_PAYMENT = EXCHANGE_NAME + "." + "payment";
public final static String QUEUE_CAPTURE = EXCHANGE_NAME + "." + "capture";
@Bean
public List<Declarable> ds() {
return queues(QUEUE_EVENT, QUEUE_PAYMENT);
}
@Autowired
private ConnectionFactory rabbitConnectionFactory;
@Bean
public AmqpAdmin amqpAdmin() {
return new RabbitAdmin(rabbitConnectionFactory);
}
@Bean
public DirectExchange exchange() {
return new DirectExchange(EXCHANGE_NAME);
}
@Bean
public RabbitTemplate rabbitTemplate() {
RabbitTemplate r = new RabbitTemplate(rabbitConnectionFactory);
r.setExchange(EXCHANGE_NAME);
r.setChannelTransacted(false);
r.setConnectionFactory(rabbitConnectionFactory);
r.setMessageConverter(jsonMessageConverter());
return r;
}
@Bean
public MessageConverter jsonMessageConverter() {
return new Jackson2JsonMessageConverter();
}
private List<Declarable> queues(String... nomes) {
List<Declarable> result = new ArrayList<>();
for (int i = 0; i < nomes.length; i++) {
result.add(newQueue(nomes[i]));
if (nomes[i].equals(QUEUE_EVENT))
result.add(makeBindingToQueue(nomes[i], EVENT_ROUTING_KEY));
else
result.add(makeBindingToQueue(nomes[i], PAYEMNT_ROUTING_KEY));
}
return result;
}
private static Binding makeBindingToQueue(String queueName, String route) {
return new Binding(queueName, DestinationType.QUEUE, EXCHANGE_NAME, route, null);
}
private static Queue newQueue(String nome) {
return new Queue(nome);
}
}
I send the message using this:
String response = (String) rabbitTemplate.convertSendAndReceive(RabbitConfiguration.EXCHANGE_NAME,
RabbitConfiguration.PAYEMNT_ROUTING_KEY, domainEvent);
And I wait for a response, casting the result to the expected object.
This communication is between two different applications using the same rabbit server.
How can I solve this?
I expected Rabbit to convert the message to JSON in the send operation and do the same in the reply, so I've created an object corresponding to the JSON of the reply.
Please show the configuration for the listener. You should be sure that the listener container there is supplied with the Jackson2JsonMessageConverter as well, so the __TypeId__ header is carried back with the reply.
Also see the Spring AMQP JSON sample for some help.
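For reference, a minimal sketch of what the listener-side container configuration could look like (assuming a @RabbitListener-based consumer in app2; the factory bean shown here is an illustration, not taken from the question):
@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    // Use the same JSON converter on the consuming side, so the reply is serialized as JSON
    // and the __TypeId__ header can be mapped back to a class known to the sending application
    factory.setMessageConverter(jsonMessageConverter());
    return factory;
}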

How to return a user-defined class in JMX?

I am new to JMX and now I wish to monitor my project with JMX. Is there any way to return a user-defined class via MBeans/MXBeans? I know that OpenType can help, but I don't know how to use it. I also went through the CompositeData and TabularData types, but they may not work for me because I would need to convert each and every class into the respective data types.
Please provide your help.
Thank you in advance!!
You have to create an interface SomethingMBean and a class that implements that interface.
public interface HelloMBean {
public void sayHello();
public int add(int x, int y);
public String getName();
public int getCacheSize();
public void setCacheSize(int size);
}
public class Hello ...
implements HelloMBean {
public void sayHello() {
System.out.println("hello, world");
}
public int add(int x, int y) {
return x + y;
}
public String getName() {
return this.name;
}
public int getCacheSize() {
return this.cacheSize;
}
public synchronized void setCacheSize(int size) {
...
this.cacheSize = size;
System.out.println("Cache size now " + this.cacheSize);
}
...
private final String name = "Reginald";
private int cacheSize = DEFAULT_CACHE_SIZE;
private static final int DEFAULT_CACHE_SIZE = 200;
}
And then, you have to register your MBean...
MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
ObjectName name = new ObjectName("com.example:type=Hello");
Hello mbean = new Hello();
mbs.registerMBean(mbean, name);
Take a look at the Java Tutorial.
If your project is using Spring, then it is quite easy to expose any user-defined class to JMX by simply using annotations:
@ManagedResource - to expose the class to JMX
@ManagedAttribute - to expose any fields of a ManagedResource class as attributes
@ManagedOperation - to expose any method of a ManagedResource class as an operation
Have a look at the official Spring documentation which contains quite a few illustrative examples as well.
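A minimal sketch of such an annotated class (the names below are illustrative, not from the question); with annotation-driven export enabled (for example via @EnableMBeanExport or an MBeanExporter bean), Spring registers the bean under the given ObjectName automatically:
import org.springframework.jmx.export.annotation.ManagedAttribute;
import org.springframework.jmx.export.annotation.ManagedOperation;
import org.springframework.jmx.export.annotation.ManagedResource;
import org.springframework.stereotype.Component;
@Component
@ManagedResource(objectName = "com.example:type=CacheStats")
public class CacheStats {
    private int cacheSize = 200;
    @ManagedAttribute(description = "Current cache size")
    public int getCacheSize() {
        return cacheSize;
    }
    @ManagedAttribute
    public void setCacheSize(int cacheSize) {
        this.cacheSize = cacheSize;
    }
    @ManagedOperation(description = "Reset the cache size to its default")
    public void reset() {
        this.cacheSize = 200;
    }
}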
