My goal is to test using Arquillian Warp:
1. A previous test has already navigated to a JSF page.
2. Another test sets a text field to a value; using Warp, I need to inject the ViewScoped bean and verify the value in the backing bean.
Sample Code
@RunWith(Arquillian.class)
@WarpTest
@RunAsClient
public class TestIT {
private static final String WEBAPP_SRC = "src/main/webapp";
private static final String WEB_INF_SRC = "src/main/webapp/WEB-INF";
private static final String WEB_RESOURCES = "src/main/webapp/resources";
@Deployment(testable = true)
public static WebArchive createDeployment() {
File[] files = Maven.resolver().loadPomFromFile("pom.xml")
.importRuntimeDependencies().resolve().withTransitivity().asFile();
WebArchive war = ShrinkWrap.create(WebArchive.class, "test.war")
.addPackages(true, "com.mobitill")
.addAsWebInfResource(EmptyAsset.INSTANCE, "beans.xml")
.addAsWebInfResource(new File(WEB_INF_SRC, "template.xhtml"))
.addAsWebInfResource(new File(WEB_INF_SRC, "jboss-web.xml"))
.addAsWebInfResource(new File(WEB_INF_SRC, "web.xml"))
.addAsWebResource(new File(WEBAPP_SRC, "index.xhtml"))
.addAsWebResource(new File("src/main/webapp/demo", "home.xhtml"), "demo/home.xhtml")
.addAsResource("test-persistence.xml", "META-INF/persistence.xml")
.merge(ShrinkWrap.create(GenericArchive.class).as(ExplodedImporter.class)
.importDirectory(WEB_RESOURCES).as(GenericArchive.class), "resources")
.addAsLibraries(files);
System.out.println(war.toString(true));
return war;
}
@Drone
private WebDriver browser;
@ArquillianResource
private URL deploymentUrl;
@Test
@InSequence(1)
public final void browserTest() throws Exception {
browser.get(deploymentUrl.toExternalForm() + "index");
guardHttp(loginImage).click();
Assert.assertEquals("navigate to home page ", "https://127.0.0.1:8080/citi/demo/home", browser.getCurrentUrl());
}
@Test
@InSequence(2)
public final void homeManagedBean() throws Exception {
Warp
.initiate(new Activity() {
@Override
public void perform() {
WebElement txtMerchantEmailAddress = browser.findElement(By.id("txtMerchantEmailAddress"));
txtMerchantEmailAddress.sendKeys("demouser@yahoo.com");
guardAjax(btnMerchantSave).click();
}
})
.observe(request().header().containsHeader("faces-request"))
.inspect(new Inspection() {
private static final long serialVersionUID = 1L;
@Inject
HomeManagedBean hmb;
@ArquillianResource
FacesContext facesContext;
@BeforePhase(UPDATE_MODEL_VALUES)
public void initial_state_havent_changed_yet() {
Assert.assertEquals("email value ", "demouser@yahoo.com", hmb.getMerchantEmail());
}
@AfterPhase(UPDATE_MODEL_VALUES)
public void changed_input_value_has_been_applied() {
Assert.assertEquals("email value ", "demouser@yahoo.com", hmb.getMerchantEmail());
}
});
}
}
The error I keep getting is:
org.jboss.arquillian.warp.impl.client.execution.WarpSynchronizationException: The Warp failed to observe requests or match them with response.
There were no requests matched by observer [containsHeader('faces-request')]
If Warp enriched a wrong request, use observe(...) method to select appropriate request which should be enriched instead.
Otherwise check the server-side log and enable Arquillian debugging mode on both, test and server VM by passing -Darquillian.debug=true.
at org.jboss.arquillian.warp.impl.client.execution.SynchronizationPoint.awaitResponses(SynchronizationPoint.java:155)
at org.jboss.arquillian.warp.impl.client.execution.DefaultExecutionSynchronizer.waitForResponse(DefaultExecutionSynchronizer.java:60)
at org.jboss.arquillian.warp.impl.client.execution.WarpExecutionObserver.awaitResponse(WarpExecutionObserver.java:64)
Any help would be welcome, as would an alternative way of validating a JSF ViewScoped bean during integration testing.
I was able to work out why it was not working and created a sample project for future reference, in case anyone comes across the same problem:
Testing using Arquillian Warp example
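For reference, the usual cause of this WarpSynchronizationException is an observer that never matches the AJAX postback. The sketch below shows a more permissive observer that matches on the request URI instead of the header; the URI fragment "home.xhtml" is my assumption based on the page under test, not something taken from the sample project, and btnMerchantSave stands for the @FindBy-injected button from the original test:
@Test
@InSequence(2)
public void homeManagedBean() {
    Warp
        .initiate(new Activity() {
            @Override
            public void perform() {
                browser.findElement(By.id("txtMerchantEmailAddress"))
                       .sendKeys("demouser@yahoo.com");
                guardAjax(btnMerchantSave).click();
            }
        })
        // Match the postback by its URI. Note that JSF sends the AJAX header
        // as "Faces-Request: partial/ajax", so a lowercase "faces-request"
        // observer can fail to match if the comparison is case-sensitive.
        .observe(request().uri().contains("home.xhtml"))
        .inspect(new Inspection() {
            private static final long serialVersionUID = 1L;
            @Inject
            HomeManagedBean hmb;
            @AfterPhase(UPDATE_MODEL_VALUES)
            public void changed_input_value_has_been_applied() {
                Assert.assertEquals("demouser@yahoo.com", hmb.getMerchantEmail());
            }
        });
}
Here request() is the static org.jboss.arquillian.warp.client.filter.http.HttpFilters.request() used in Warp's documentation.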
A little background: I am working on a topology using Apache Storm and thought, why not use dependency injection in it? But I was not sure how it would behave in a cluster environment once the topology is deployed. I started looking for answers on whether DI is a good option for Storm topologies and came across some threads about Apache Spark where serialization was mentioned as a problem, plus some responses about Apache Storm along the same lines. So I finally decided to write a sample topology with Google Guice to see what happens.
I wrote a sample topology with two bolts and used Google Guice to inject dependencies. On each tick tuple, the first bolt creates a message, prints it to the log, and calls some injected classes that do the same. The message is then emitted to the second bolt, which has the same printing logic.
First Bolt
public class FirstBolt extends BaseRichBolt {
private OutputCollector collector;
private static int count = 0;
private FirstInjectClass firstInjectClass;
@Override
public void prepare(Map map, TopologyContext topologyContext, OutputCollector outputCollector) {
collector = outputCollector;
Injector injector = Guice.createInjector(new Module());
firstInjectClass = injector.getInstance(FirstInjectClass.class);
}
@Override
public void execute(Tuple tuple) {
count++;
String message = "Message count "+count;
firstInjectClass.printMessage(message);
log.error(message);
collector.emit("TO_SECOND_BOLT", new Values(message));
collector.ack(tuple);
}
@Override
public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
outputFieldsDeclarer.declareStream("TO_SECOND_BOLT", new Fields("MESSAGE"));
}
@Override
public Map<String, Object> getComponentConfiguration() {
Config conf = new Config();
conf.put(Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS, 10);
return conf;
}
}
Second Bolt
public class SecondBolt extends BaseRichBolt {
private OutputCollector collector;
private SecondInjectClass secondInjectClass;
@Override
public void prepare(Map map, TopologyContext topologyContext, OutputCollector outputCollector) {
collector = outputCollector;
Injector injector = Guice.createInjector(new Module());
secondInjectClass = injector.getInstance(SecondInjectClass.class);
}
@Override
public void execute(Tuple tuple) {
String message = (String) tuple.getValue(0);
secondInjectClass.printMessage(message);
log.error("SecondBolt {}",message);
collector.ack(tuple);
}
@Override
public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
}
}
Class in which dependencies are injected
public class FirstInjectClass {
FirstInterface firstInterface;
private final String prepend = "FirstInjectClass";
@Inject
public FirstInjectClass(FirstInterface firstInterface) {
this.firstInterface = firstInterface;
}
public void printMessage(String message){
log.error("{} {}", prepend, message);
firstInterface.printMethod(message);
}
}
Interface used for binding
public interface FirstInterface {
void printMethod(String message);
}
Implementation of interface
public class FirstInterfaceImpl implements FirstInterface{
private final String prepend = "FirstInterfaceImpl";
public void printMethod(String message){
log.error("{} {}", prepend, message);
}
}
In the same way, another class receives its dependency via DI
public class SecondInjectClass {
SecondInterface secondInterface;
private final String prepend = "SecondInjectClass";
@Inject
public SecondInjectClass(SecondInterface secondInterface) {
this.secondInterface = secondInterface;
}
public void printMessage(String message){
log.error("{} {}", prepend, message);
secondInterface.printMethod(message);
}
}
Another interface for binding
public interface SecondInterface {
void printMethod(String message);
}
Implementation of the second interface
public class SecondInterfaceImpl implements SecondInterface{
private final String prepend = "SecondInterfaceImpl";
public void printMethod(String message){
log.error("{} {}", prepend, message);
}
}
Module Class
public class Module extends AbstractModule {
@Override
protected void configure() {
bind(FirstInterface.class).to(FirstInterfaceImpl.class);
bind(SecondInterface.class).to(SecondInterfaceImpl.class);
}
}
Nothing fancy here, just two bolts and a couple of classes for DI. I deployed it on the server and it works just fine. The catch, though, is that I have to initialize the Injector in each bolt, which makes me question what the side effects of that will be.
This implementation is simple, just two bolts, but what if I have more? What impact would initializing an Injector in every bolt have on the topology?
If I try to initialize the Injector outside the prepare method, I get a serialization error.
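One common way to avoid both issues, one injector per bolt and the serialization error, is to keep the Injector in a static, lazily initialized holder that each bolt consults from prepare(). The sketch below is illustrative; the class name InjectorHolder is mine, not part of the question:
import com.google.inject.Guice;
import com.google.inject.Injector;
// Lazily creates one Injector per worker JVM. Static fields are not
// serialized with the topology, so this sidesteps the serialization
// error, and all bolts in the same worker share a single injector.
public final class InjectorHolder {
    private static volatile Injector injector;
    private InjectorHolder() {}
    public static Injector get() {
        if (injector == null) {
            synchronized (InjectorHolder.class) {
                if (injector == null) {
                    injector = Guice.createInjector(new Module());
                }
            }
        }
        return injector;
    }
}
Each bolt would then call firstInjectClass = InjectorHolder.get().getInstance(FirstInjectClass.class); inside prepare(), which Storm invokes after deserialization on the worker, so nothing non-serializable travels with the topology.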
I have a Micronaut application that uses Micrometer to report metrics to InfluxDB with the micronaut-micrometer project. Currently it is using the Statsd Registry provided via the io.micronaut.configuration:micronaut-micrometer-registry-statsd dependency.
I would like to instead output metrics in Influx Line Protocol (ILP), but the micronaut-micrometer project does not offer an Influx Registry currently. I tried to work around this by importing the io.micrometer:micrometer-registry-influx dependency and configuring an InfluxMeterRegistry manually like this:
@Factory
public class MyMetricRegistryConfigurer implements MeterRegistryConfigurer {
@Bean
@Primary
@Singleton
public MeterRegistry getMeterRegistry() {
InfluxConfig config = new InfluxConfig() {
@Override
public Duration step() {
return Duration.ofSeconds(10);
}
@Override
public String db() {
return "metrics";
}
@Override
public String get(String k) {
return null; // accept the rest of the defaults
}
};
return new InfluxMeterRegistry(config, Clock.SYSTEM);
}
@Override
public boolean supports(MeterRegistry meterRegistry) {
return meterRegistry instanceof InfluxMeterRegistry;
}
}
When the application runs, the metrics are exposed on my /metrics endpoint as I would expect, but nothing gets written to InfluxDB. I confirmed that my local InfluxDB accepts metrics at the expected localhost:8086/write?db=metrics endpoint using curl. Can anyone give me some pointers to get this working? I'm wondering if I need to manually define a reporter somewhere...
After playing around for a bit, I got this working with the following code:
@Factory
public class InfluxMeterRegistryFactory {
@Bean
@Singleton
@Requires(property = MeterRegistryFactory.MICRONAUT_METRICS_ENABLED, value =
StringUtils.TRUE, defaultValue = StringUtils.TRUE)
@Requires(beans = CompositeMeterRegistry.class)
public InfluxMeterRegistry getMeterRegistry() {
InfluxConfig config = new InfluxConfig() {
@Override
public Duration step() {
return Duration.ofSeconds(10);
}
@Override
public String db() {
return "metrics";
}
@Override
public String get(String k) {
return null; // accept the rest of the defaults
}
};
return new InfluxMeterRegistry(config, Clock.SYSTEM);
}
}
I also noticed that an InfluxMeterRegistry will be available out of the box in micronaut-micrometer as of v1.2.0.
I am trying to get the value of a property that is passed from a Cloud Function to a Dataflow template. I am getting errors because the value being passed is a wrapper, and calling its .get() method while the template is being built fails with this error:
An exception occurred while executing the Java class. null: InvocationTargetException: Not called from a runtime context.
public interface MyOptions extends DataflowPipelineOptions {
...
#Description("schema of csv file")
ValueProvider<String> getHeader();
void setHeader(ValueProvider<String> header);
...
}
public static void main(String[] args) throws IOException {
...
List<String> sideInputColumns = Arrays.asList(options.getHeader().get().split(","));
...
//ultimately use the getHeaders as side inputs
PCollection<String> input = p.apply(Create.of(sideInputColumns));
final PCollectionView<List<String>> finalColumnView = input.apply(View.asList());
}
How do I extract the value from the ValueProvider type?
The value of a ValueProvider is not available during pipeline construction. As such, you need to organize your pipeline so that it always has the same structure, and serializes the ValueProvider. At runtime, the individual transforms within your pipeline can inspect the value to determine how to operate.
Based on your example, you may need to do something like the following. It creates a single element, and then uses a DoFn that is evaluated at runtime to expand the headers:
public static class HeaderDoFn extends DoFn<String, String> {
private final ValueProvider<String> header;
public HeaderDoFn(ValueProvider<String> header) {
this.header = header;
}
@ProcessElement
public void processElement(ProcessContext c) {
// Ignore input element -- there should be exactly one
for (String column : this.header.get().split(",")) {
c.output(column);
}
}
}
public static void main(String[] args) throws IOException {
PCollection<String> input = p
.apply(Create.of("one")) // create a single element
.apply(ParDo.of(new HeaderDoFn(options.getHeader())));
// Note that the order of this list is not guaranteed.
final PCollectionView<List<String>> finalColumnView =
input.apply(View.asList());
}
Another option would be to use a NestedValueProvider to create a ValueProvider<List<String>> from the option, and pass that ValueProvider<List<String>> to the necessary DoFns rather than using a side input.
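A minimal sketch of that second option, assuming the Beam SDK's ValueProvider.NestedValueProvider (the splitting function is illustrative):
import java.util.Arrays;
import java.util.List;
import org.apache.beam.sdk.options.ValueProvider;
import org.apache.beam.sdk.options.ValueProvider.NestedValueProvider;
import org.apache.beam.sdk.transforms.SerializableFunction;
// Wrap the String option in a provider that splits it lazily; get()
// is only called at runtime, inside the DoFns that receive the provider.
ValueProvider<List<String>> headers =
    NestedValueProvider.of(
        options.getHeader(),
        new SerializableFunction<String, List<String>>() {
            @Override
            public List<String> apply(String header) {
                return Arrays.asList(header.split(","));
            }
        });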
Below is our program. We create multiple containers for different queues through properties in application.properties, but right now it is static: when we add another property, we must change the code.
I want to add containers dynamically. I have investigated several solutions.
1. Use BeanFactory.registerSingleton, but the bean cannot receive lifecycle callbacks, so I'm not sure the container can shut down gracefully.
2. Use a BeanFactoryPostProcessor, but it needs a BeanDefinition, and I have no idea how to construct a BeanDefinition for SimpleMessageListenerContainer, because the container is created by SimpleRabbitListenerContainerFactory.
Can anybody give me a better solution that both adds beans dynamically and lets the SimpleMessageListenerContainer start and shut down normally?
@Bean
@ConditionalOnProperty(name = "pmc.multiple.hypervisor.reply.routerkey.kvm")
public SimpleMessageListenerContainer kvmReplyQueueConsumer() {
return getSimpleMessageListenerContainer(environment
.getProperty("pmc.multiple.hypervisor.reply.routerkey.kvm"));
}
@Bean
@ConditionalOnProperty(name = "pmc.multiple.hypervisor.reply.routerkey.vmware")
public SimpleMessageListenerContainer vmwareReplyQueueConsumer() {
return getSimpleMessageListenerContainer(environment
.getProperty("pmc.multiple.hypervisor.reply.routerkey.vmware"));
}
@Bean
@ConditionalOnProperty(name = "pmc.multiple.hypervisor.reply.routerkey.powervc")
public SimpleMessageListenerContainer powervcReplyQueueConsumer() {
return getSimpleMessageListenerContainer(environment
.getProperty("pmc.multiple.hypervisor.reply.routerkey.powervc"));
}
@Autowired
private SimpleRabbitListenerContainerFactory simpleRabbitListenerContainerFactory;
private SimpleMessageListenerContainer getSimpleMessageListenerContainer(String queueName){
return simpleRabbitListenerContainerFactory.createContainerInstance();
}
Take all the properties you need (for example, by regexp) and then register the beans you want. There are two separate tasks here: (1) how to get Spring properties and (2) how to register beans dynamically.
1) To iterate over properties the 'Spring way':
@Autowired
Properties props;
....
for(Entry<Object, Object> e : props.entrySet()) {
if( /*some code to match*/ ){
//dispatch bean creation
}
}
2) You can create beans dynamically with a BeanFactoryPostProcessor:
public class MyClassPostRegister implements BeanFactoryPostProcessor {
public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory) {
//create bean definition:
GenericBeanDefinition beanDefinition = new GenericBeanDefinition();
beanDefinition.setBeanClass(MyBeanClass.class);
beanDefinition.setLazyInit(false);
beanDefinition.setAbstract(false);
beanDefinition.setAutowireCandidate(true);
beanDefinition.setScope("prototype");
beanFactory.registerBeanDefinition("dynamicBean",beanDefinition);
}
}
Appendix after the comment by @GrapeBaBa:
Actually I use simpleRabbitListenerContainerFactory.createContainerInstance() to create the container, so how do I transform this to use a BeanDefinition? Please pay attention to the lines marked with (!!!).
Create your own component:
@Component
public class MyClassPostRegister implements BeanFactoryPostProcessor {
@Autowired
Properties props; //this gives you access to all properties
//following is example of filter by name
static final Pattern myInterestingProperties =
Pattern.compile("pmc\\.multiple\\.hypervisor\\.reply\\.routerkey\\..+");
Add the post-processing handler:
public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory) {
//iterate through properties
for(Entry<Object, Object> e : props.entrySet()) {
Matcher m = myInterestingProperties.matcher(e.getKey().toString());
if( !m.matches() )
continue;
//create bean definition:
GenericBeanDefinition beanDefinition = new GenericBeanDefinition();
beanDefinition.setBeanClass(SimpleMessageListenerContainer.class);
beanDefinition.setLazyInit(false);
beanDefinition.setAbstract(false);
beanDefinition.setAutowireCandidate(true);
beanDefinition.setScope("prototype");
//!!! Now specify the name of the factory method; since the method is not
//static, the owning bean would also need to be named via setFactoryBeanName
beanDefinition.setFactoryMethodName("getSimpleMessageListenerContainer");
//!!! Now specify the factory arguments:
ConstructorArgumentValues v = new ConstructorArgumentValues();
v.addGenericArgumentValue( e.getKey() ); //string
beanDefinition.setConstructorArgumentValues( v );
//use the property key as the bean name so each registration is unique
beanFactory.registerBeanDefinition(e.getKey().toString(), beanDefinition);
}
}
}
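For completeness, a lifecycle-friendly alternative is to skip bean definitions entirely and register the containers with Spring AMQP's RabbitListenerEndpointRegistry, which starts and stops them with the application context. This is only a sketch; the endpoint id and listener body are illustrative, and the three-argument registerListenerContainer overload (which starts the container immediately) requires a reasonably recent spring-rabbit:
import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.MessageListener;
import org.springframework.amqp.rabbit.config.SimpleRabbitListenerContainerFactory;
import org.springframework.amqp.rabbit.config.SimpleRabbitListenerEndpoint;
import org.springframework.amqp.rabbit.listener.RabbitListenerEndpointRegistry;
// Registers one container per queue name; the registry then manages the
// container's lifecycle (start on context refresh, stop on shutdown).
public void registerContainer(RabbitListenerEndpointRegistry registry,
                              SimpleRabbitListenerContainerFactory factory,
                              String queueName) {
    SimpleRabbitListenerEndpoint endpoint = new SimpleRabbitListenerEndpoint();
    endpoint.setId(queueName + ".endpoint"); // ids must be unique
    endpoint.setQueueNames(queueName);
    endpoint.setMessageListener(new MessageListener() {
        @Override
        public void onMessage(Message message) {
            // handle the reply message here
        }
    });
    registry.registerListenerContainer(endpoint, factory, true); // true = start now
}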
I have a problem with MBeans. I have created a simple MBean and registered it on the default MBeanServer (run via Eclipse or java -jar mbean.jar), and in the same process, if I look for the registered MBean with a simple query:
for (ObjectInstance instance : mbs.queryMBeans(new ObjectName(ObjectNameMbean), null)) {
System.out.println(instance.toString());
}
the query returns my MBean. But if I start another process and search for the registered MBean, it is not found! Why?
The approach is (this is the process that keeps running):
public static void main(String[] args) throws Exception
{
MBeanServer mbeanServer =ManagementFactory.getPlatformMBeanServer();
ObjectName objectName = new ObjectName(ObjectNameMbean);
Simple simple = new Simple (1, 0);
mbeanServer.registerMBean(simple, objectName);
while (true)
{
// wait (is this necessary?)
}
}
So this is the first process that is running (its only purpose is to register the MBean, because another process wants to read this information).
Then I start another process to search for this MBean, but find nothing.
I'm not using JBoss but the local Java Virtual Machine; my goal is to deploy this simple application in one EJB (autostart) while another EJB reads all the information.
All suggestions are really appreciated.
This example should be more useful:
Object Hello:
public class Hello implements HelloMBean {
public void sayHello() {
System.out.println("hello, world");
}
public int add(int x, int y) {
return x + y;
}
public String getName() {
return this.name;
}
public int getCacheSize() {
return this.cacheSize;
}
public synchronized void setCacheSize(int size) {
this.cacheSize = size;
System.out.println("Cache size now " + this.cacheSize);
}
private final String name = "Reginald";
private int cacheSize = DEFAULT_CACHE_SIZE;
private static final int DEFAULT_CACHE_SIZE = 200;
}
Interface HelloMBean (implemented by Hello)
public interface HelloMBean {
public void sayHello();
public int add(int x, int y);
public String getName();
public int getCacheSize();
public void setCacheSize(int size);
}
Simple Main
import java.lang.management.ManagementFactory;
import java.util.logging.Logger;
import javax.management.MBeanServer;
import javax.management.ObjectName;
public class Main {
static Logger aLog = Logger.getLogger("MBeanTest");
public static void main(String[] args) {
try{
MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
ObjectName name = new ObjectName("ApplicationDomain:type=Hello");
Hello mbean = new Hello();
mbs.registerMBean(mbean, name);
// System.out.println(mbs.getAttribute(name, "Name"));
aLog.info("Waiting forever...");
Thread.sleep(Long.MAX_VALUE);
}
catch(Exception x){
x.printStackTrace();
aLog.info("exception");
}
}
}
I exported this project as a jar file and ran it with "java -jar helloBean.jar"; then, from Eclipse, I modified the main class to read this MBean's information (for example, the "Name" attribute) using the same ObjectName that was used to register it.
Main to read:
public static void main(String[] args) {
try{
MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
ObjectName name = new ObjectName("ApplicationDomain:type=Hello");
System.out.println(mbs.getAttribute(name, "Name"));
}
catch(Exception x){
x.printStackTrace();
aLog.info("exception");
}
}
But nothing: the bean is not found.
Project link: here!
Any idea?
I suspect the issue here is that you have multiple MBeanServer instances. You did not mention how you acquired the MBeanServer in each case, but in your second code sample, you are creating a new MBeanServer instance which may not be the same instance that other threads are reading from. (I assume this is all in one JVM...)
If you are using the platform agent, I recommend you acquire the MBeanServer using the ManagementFactory as follows:
MBeanServer mbs = java.lang.management.ManagementFactory.getPlatformMBeanServer() ;
That way, you will always get the same MBeanServer instance.
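Note also that if the reader really runs in a separate JVM (as with java -jar plus Eclipse above), no in-process platform MBeanServer lookup will ever see the other process's beans; each JVM has its own. In that case you have to connect through a JMX connector. A minimal sketch, assuming the registering JVM was started with the standard com.sun.management.jmxremote system properties shown in the comment (the port and the disabled auth/SSL are illustrative):
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
public class RemoteReader {
    public static void main(String[] args) throws Exception {
        // Target JVM started with:
        //   -Dcom.sun.management.jmxremote.port=9999
        //   -Dcom.sun.management.jmxremote.authenticate=false
        //   -Dcom.sun.management.jmxremote.ssl=false
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            ObjectName name = new ObjectName("ApplicationDomain:type=Hello");
            System.out.println(mbsc.getAttribute(name, "Name"));
        }
    }
}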