I am new to QAF and I need to implement self-healing in our test methods using Healenium. I have implemented it without QAF and it works fine; please refer to the code below.
import com.epam.healenium.SelfHealingDriver;
import io.github.bonigarcia.wdm.WebDriverManager;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.openqa.selenium.Dimension;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import java.util.concurrent.TimeUnit;
public class BaseTest {
protected static SelfHealingDriver driver;
@BeforeAll
public static void setUp() {
WebDriverManager.chromedriver().setup();
ChromeOptions options = new ChromeOptions();
options.setHeadless(false);
//declare delegate
WebDriver delegate = new ChromeDriver(options);
driver = SelfHealingDriver.create(delegate);
driver.manage().timeouts().implicitlyWait(4, TimeUnit.SECONDS);
driver.manage().window().setSize(new Dimension(1200, 800));
}
@AfterAll
public static void afterAll() {
if (driver != null) {
driver.quit();
}
}
}
I just want to wrap this self-healing driver with a QAF web driver, as in the code above.
QAF discourages the use of a static class variable for the driver. The code provided in the question will not work for parallel execution. Driver management is taken care of by QAF with thread safety, and its behavior can be configured using the property selenium.singletone.
You can try the following approach when you want a SelfHealingDriver:
public class SampleTest extends WebDriverTestCase {
@Test
public void yourTestCase(){
SelfHealingDriver driver = SelfHealingDriver.create(getDriver());
//your code goes below
}
}
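If you want the wrapper available in every test without repeating that call, one option (a sketch on top of the answer above, not something QAF provides) is a small helper in a shared base class:
public class BaseTestCase extends WebDriverTestCase {
    // Hypothetical helper: wraps the thread-local QAF driver on demand.
    protected SelfHealingDriver healingDriver() {
        return SelfHealingDriver.create(getDriver());
    }
}
Since getDriver() returns the driver managed for the current thread, the wrapping itself remains safe for parallel execution.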
SelfHealingDriver proxies the actual driver. You can achieve the self-healing functionality without a driver proxy by using a listener for findElement/findChildElement. A driver listener should work without proxying the driver. For example:
public class WDListener extends QAFWebDriverCommandAdapter {
private static final Map<String, Object> byToString = JSONUtil.toMap(
"{'ByCssSelector':'css selector','ByClassName':'class name','ByXPath':'xpath','ByPartialLinkText':'partial link text','ById':'id','ByLinkText':'link text','ByName':'name'}");
//this method will be called when a new driver object is created
public void onInitialize(QAFExtendedWebDriver driver){
driver.manage().timeouts().implicitlyWait(4, TimeUnit.SECONDS);
driver.manage().window().setSize(new Dimension(1200, 800));
}
@Override
public void afterCommand(QAFExtendedWebDriver driver, CommandTracker commandTracker) {
if (DriverCommand.FIND_ELEMENT.equalsIgnoreCase(commandTracker.getCommand())
|| DriverCommand.FIND_ELEMENTS.equalsIgnoreCase(commandTracker.getCommand())
|| DriverCommand.FIND_CHILD_ELEMENT.equalsIgnoreCase(commandTracker.getCommand())
|| DriverCommand.FIND_CHILD_ELEMENTS.equalsIgnoreCase(commandTracker.getCommand())) {
Map<String, Object> parameters = commandTracker.getParameters();
if (parameters != null && parameters.containsKey("using") && parameters.containsKey("value")) {
By by = LocatorUtil
.getBy(String.format("%s=%s", parameters.get("using"), parameters.get("value")));
HealingServiceImpl healingServiceImpl = new HealingServiceImpl(new SelfHealingEngine(driver));
StackTraceElement[] stackTrace = Thread.currentThread().getStackTrace();
Object result = commandTracker.getResponce().getValue();
List<WebElement> webElements = List.class.isAssignableFrom(result.getClass()) ? (List<WebElement>) result : Collections.singletonList((WebElement) result);
healingServiceImpl.savePath(new PageAwareBy(driver.getTitle(), by), webElements);
}
}
}
@Override
public void onFailure(QAFExtendedWebDriver driver, CommandTracker commandTracker) {
// StackTraceElement[] stackTrace =
// commandTracker.getException().getStackTrace();
Map<String, Object> parameters = commandTracker.getParameters();
if (parameters != null && parameters.containsKey("using") && parameters.containsKey("value")) {
By by = LocatorUtil
.getBy(String.format("%s=%s", parameters.get("using"), parameters.get("value")));
StackTraceElement[] stackTrace = Thread.currentThread().getStackTrace();
HealingServiceImpl healingServiceImpl = new HealingServiceImpl(new SelfHealingEngine(driver));
Optional<By> healedBy = healingServiceImpl.healLocators(new PageAwareBy(driver.getTitle(), by), null, stackTrace);
if(healedBy.isPresent()) {
commandTracker.getParameters().putAll(toParams(healedBy.get()));
commandTracker.setRetry(true);
}
}
}
}
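To have QAF pick this listener up, register it through the wd.command.listeners property in your project configuration (the fully qualified class name below is just an example; use your own package):
wd.command.listeners=com.example.listeners.WDListener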
A little background: I am working on a topology using Apache Storm, and I thought, why not use dependency injection in it? But I was not sure how it would behave in a cluster environment once the topology is deployed. I started looking for answers on whether DI is a good option to use in Storm topologies, and I came across some threads about Apache Spark where it was mentioned that serialization is going to be a problem, plus some responses for Apache Storm along the same lines. So finally I decided to write a sample topology with Google Guice to see what happens.
I wrote a sample topology with two bolts and used Google Guice to inject the dependencies. The first bolt receives a tick tuple every 10 seconds, creates a message, prints it to the log, and calls some injected classes which do the same. The message is then emitted to the second bolt, which applies the same printing logic.
First Bolt
public class FirstBolt extends BaseRichBolt {
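// NOTE: the 'log' field used below is not declared in the original post; it is
// assumed to come from Lombok's @Slf4j or a static Logger field.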
private OutputCollector collector;
private static int count = 0;
private FirstInjectClass firstInjectClass;
@Override
public void prepare(Map map, TopologyContext topologyContext, OutputCollector outputCollector) {
collector = outputCollector;
Injector injector = Guice.createInjector(new Module());
firstInjectClass = injector.getInstance(FirstInjectClass.class);
}
@Override
public void execute(Tuple tuple) {
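// Storm delivers a tick tuple here every 10 seconds, as configured via
// TOPOLOGY_TICK_TUPLE_FREQ_SECS in getComponentConfiguration() below.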
count++;
String message = "Message count "+count;
firstInjectClass.printMessage(message);
log.error(message);
collector.emit("TO_SECOND_BOLT", new Values(message));
collector.ack(tuple);
}
@Override
public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
outputFieldsDeclarer.declareStream("TO_SECOND_BOLT", new Fields("MESSAGE"));
}
@Override
public Map<String, Object> getComponentConfiguration() {
Config conf = new Config();
conf.put(Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS, 10);
return conf;
}
}
Second Bolt
public class SecondBolt extends BaseRichBolt {
private OutputCollector collector;
private SecondInjectClass secondInjectClass;
@Override
public void prepare(Map map, TopologyContext topologyContext, OutputCollector outputCollector) {
collector = outputCollector;
Injector injector = Guice.createInjector(new Module());
secondInjectClass = injector.getInstance(SecondInjectClass.class);
}
@Override
public void execute(Tuple tuple) {
String message = (String) tuple.getValue(0);
secondInjectClass.printMessage(message);
log.error("SecondBolt {}",message);
collector.ack(tuple);
}
@Override
public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
}
}
Class in which dependencies are injected
public class FirstInjectClass {
FirstInterface firstInterface;
private final String prepend = "FirstInjectClass";
@Inject
public FirstInjectClass(FirstInterface firstInterface) {
this.firstInterface = firstInterface;
}
public void printMessage(String message){
log.error("{} {}", prepend, message);
firstInterface.printMethod(message);
}
}
Interface used for binding
public interface FirstInterface {
void printMethod(String message);
}
Implementation of the interface
public class FirstInterfaceImpl implements FirstInterface{
private final String prepend = "FirstInterfaceImpl";
public void printMethod(String message){
log.error("{} {}", prepend, message);
}
}
In the same way, another class receives its dependency via DI
public class SecondInjectClass {
SecondInterface secondInterface;
private final String prepend = "SecondInjectClass";
@Inject
public SecondInjectClass(SecondInterface secondInterface) {
this.secondInterface = secondInterface;
}
public void printMessage(String message){
log.error("{} {}", prepend, message);
secondInterface.printMethod(message);
}
}
Another interface for binding
public interface SecondInterface {
void printMethod(String message);
}
Implementation of the second interface
public class SecondInterfaceImpl implements SecondInterface{
private final String prepend = "SecondInterfaceImpl";
public void printMethod(String message){
log.error("{} {}", prepend, message);
}
}
Module Class
public class Module extends AbstractModule {
@Override
protected void configure() {
bind(FirstInterface.class).to(FirstInterfaceImpl.class);
bind(SecondInterface.class).to(SecondInterfaceImpl.class);
}
}
Nothing fancy here, just two bolts and a couple of classes for DI. I deployed it on the server and it works just fine. The catch, though, is that I have to initialize the Injector in each bolt, which makes me wonder what the side effects of that are going to be.
This implementation is simple, just two bolts, but what if I have more bolts? What impact would it have on the topology if I had to initialize an Injector in all of the bolts?
If I try to initialize the Injector outside the prepare method, I get a serialization error.
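For context on that last error: a bolt instance is serialized when the topology is submitted, and a Guice Injector is not Serializable, which is why it can only be created inside prepare() (or held in a transient field). As an illustration of how to avoid building a fresh Injector per bolt, and not something from the original post, here is a lazily initialized holder (reusing the post's own Module class) shared by all bolts in the same worker JVM:
import com.google.inject.Guice;
import com.google.inject.Injector;
public final class InjectorHolder {
    private static volatile Injector injector;
    private InjectorHolder() {}
    // Lazily creates a single Injector per worker JVM; safe to call from prepare().
    public static Injector get() {
        if (injector == null) {
            synchronized (InjectorHolder.class) {
                if (injector == null) {
                    injector = Guice.createInjector(new Module());
                }
            }
        }
        return injector;
    }
}
Each bolt's prepare() would then call InjectorHolder.get().getInstance(...) instead of creating its own Injector, so adding more bolts does not multiply the injector setup cost.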
I have a Micronaut application that uses Micrometer to report metrics to InfluxDB via the micronaut-micrometer project. Currently it uses the StatsD registry provided by the io.micronaut.configuration:micronaut-micrometer-registry-statsd dependency.
I would like to output metrics in Influx Line Protocol (ILP) instead, but the micronaut-micrometer project does not currently offer an Influx registry. I tried to work around this by importing the io.micrometer:micrometer-registry-influx dependency and configuring an InfluxMeterRegistry manually, like this:
@Factory
public class MyMetricRegistryConfigurer implements MeterRegistryConfigurer {
@Bean
@Primary
@Singleton
public MeterRegistry getMeterRegistry() {
InfluxConfig config = new InfluxConfig() {
@Override
public Duration step() {
return Duration.ofSeconds(10);
}
@Override
public String db() {
return "metrics";
}
@Override
public String get(String k) {
return null; // accept the rest of the defaults
}
};
return new InfluxMeterRegistry(config, Clock.SYSTEM);
}
@Override
public boolean supports(MeterRegistry meterRegistry) {
return meterRegistry instanceof InfluxMeterRegistry;
}
}
When the application runs, the metrics are exposed on my /metrics endpoint as I would expect, but nothing gets written to InfluxDB. I confirmed that my local InfluxDB accepts metrics at the expected localhost:8086/write?db=metrics endpoint using curl. Can anyone give me some pointers to get this working? I'm wondering if I need to manually define a reporter somewhere...
After playing around for a bit, I got this working with the following code:
@Factory
public class InfluxMeterRegistryFactory {
@Bean
@Singleton
@Requires(property = MeterRegistryFactory.MICRONAUT_METRICS_ENABLED, value = StringUtils.TRUE, defaultValue = StringUtils.TRUE)
@Requires(beans = CompositeMeterRegistry.class)
public InfluxMeterRegistry getMeterRegistry() {
InfluxConfig config = new InfluxConfig() {
@Override
public Duration step() {
return Duration.ofSeconds(10);
}
@Override
public String db() {
return "metrics";
}
@Override
public String get(String k) {
return null; // accept the rest of the defaults
}
};
return new InfluxMeterRegistry(config, Clock.SYSTEM);
}
}
I also noticed that an InfluxMeterRegistry will be available out of the box in micronaut-micrometer as of v1.2.0.
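Once that release is available, the built-in registry should be configurable like the other micronaut-micrometer registries, under micronaut.metrics.export.influx in application.yml. The exact keys below are my assumption, so verify them against the docs:
micronaut:
  metrics:
    enabled: true
    export:
      influx:
        enabled: true
        step: PT10S
        db: metrics
        uri: http://localhost:8086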
I have a Firebase admin helper class that I am testing with Spock. The constructor of this class will call another method in the class to initialize certain fields if it has to, as shown below:
public class FirebaseUtility {
private static FirebaseDatabase db = null;
public FirebaseUtility() throws IOException {
if (db == null) {
initializeFirebase();
}
}
public void initializeFirebase() throws IOException {
InputStream serviceAccount = ClassLoader.getSystemResourceAsStream("serviceAccount.json");
FirebaseOptions options = new FirebaseOptions.Builder()
.setCredentials(GoogleCredentials.fromStream(serviceAccount))
.setDatabaseUrl("<my_database_url>").build();
FirebaseApp.initializeApp(options);
db = FirebaseDatabase.getInstance();
}
}
Basically, there is no point in doing all the initialization code if the FirebaseDatabase is already set.
I have tried doing this, but it does not seem to work:
class FirebaseUtilitySpec extends Specification {
def "instantiating FirebaseUtility should run initialization code"() {
given:
def f
when:
f = new FirebaseUtility()
then:
1 * f.initializeFirebase()
}
}
First of all, you cannot check interactions on original objects; you need to use a mock or spy. Furthermore, those types of objects cannot intercept interactions on static methods or constructors; for that you would have to add Mockito or even PowerMock to the mix. But basically, static methods are ugly anyway, and initialising a static member in a constructor call is not necessary. Just use a lazy getter for the database object and intercept its behaviour.
I have simplified your example a bit, removing the external dependency and just emulating Firebase so as to make it easier to answer with an MCVE:
package de.scrum_master.stackoverflow;
public class FirebaseDatabase {
private static FirebaseDatabase instance;
public static FirebaseDatabase getInstance() {
if (instance == null)
instance = new FirebaseDatabase();
return instance;
}
}
package de.scrum_master.stackoverflow;
public class FirebaseUtility {
private static FirebaseDatabase db = null;
public FirebaseDatabase getDb() {
if (db == null)
initializeFirebase();
return db;
}
protected void initializeFirebase() {
db = FirebaseDatabase.getInstance();
}
}
package de.scrum_master.stackoverflow
import spock.lang.Specification
class FirebaseUtilitySpec extends Specification {
def "instantiating FirebaseUtility runs initialization code exactly once"() {
given:
FirebaseUtility f = Spy()
when:
f.getDb()
then:
1 * f.initializeFirebase()
when:
f.getDb()
then:
0 * f.initializeFirebase()
}
}
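A side note on the spy semantics above: unless an interaction supplies a response, a Spock Spy calls through to the real method, so initializeFirebase() really runs in the first block and sets db; that is what makes the second getDb() call skip initialization. And since db is static, it survives across feature methods, which is worth keeping in mind when adding further tests.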
Is this the correct way to create an in-memory DB using Neo4j, so that a traversal query will hit only the cache and not the disk?
Approach 1: I tried this:
package com.test;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;
import org.neo4j.kernel.impl.util.FileUtils;
import java.io.File;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
public class CreateDBFactory {
private static GraphDatabaseService graphDb = null;
public static final String DB = "test/db";
public static GraphDatabaseService createInMemoryDB() {
System.out.println("- Inside createInMemoryDB() - ");
if (null == graphDb) {
synchronized (GraphDatabaseService.class) {
if (null == graphDb) {
System.out.println(" - Inside if clause -");
final Map<String, String> config = new HashMap<>();
config.put("neostore.nodestore.db.mapped_memory", "50M");
config.put("string_block_size", "60");
config.put("array_block_size", "300");
graphDb = new GraphDatabaseFactory()
.newEmbeddedDatabaseBuilder(DB).setConfig(config)
.newGraphDatabase();
registerShutdownHook(graphDb);
}
}
}
return graphDb;
}
private static void registerShutdownHook(final GraphDatabaseService graphDb) {
Runtime.getRuntime().addShutdownHook(new Thread() {
@Override
public void run() {
graphDb.shutdown();
}
});
}
public static void clearDb() {
try {
if (graphDb != null) {
graphDb.shutdown();
graphDb = null;
}
FileUtils.deleteRecursively(new File(DB));
} catch (final IOException e) {
throw new RuntimeException(e);
}
}
}
Approach 2: Using the Neo4jBasicDocTest class.
Here new ImpermanentDatabaseBuilder() is not creating the target/test-data/impermanent-db folder, so I am not able to test whether the "Nancy" node is created or not.
Neo4j doesn't have an 'in-memory' mode in the sense that all data is always stored in memory and no disk storage is used. ImpermanentGraphDatabase is the closest you'll find to anything like that, but that just creates a data directory at random and deletes it when it is shut down.
If you are okay with using disk, you can use the above ImpermanentGraphDatabase and just set the Neo4j cache to be really high. This will make everything be stored in memory as well as on disk.
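For reference, a minimal way to spin up such a database in a test (this assumes the neo4j-kernel test-jar, which ships ImpermanentGraphDatabase and TestGraphDatabaseFactory, is on the test classpath):
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.test.TestGraphDatabaseFactory;
public class ImpermanentDbExample {
    public static void main(String[] args) {
        // Creates a throwaway database whose files are deleted again on shutdown.
        GraphDatabaseService graphDb = new TestGraphDatabaseFactory().newImpermanentDatabase();
        // ... create nodes and run your traversals here ...
        graphDb.shutdown();
    }
}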
I read Blackberry - How to get the background application process id, but I'm not sure I understand it correctly. The following code gets the foreground process id:
ApplicationManager.getApplicationManager().getForegroundProcessId()
I have two processes which execute the same piece of code to make a connection. I want to log which process made the calls, along with all my usual logging data, to get a better idea of how the flow is working.
Is it possible to get the id of the process which is currently running the code? One process is in the foreground (the UI process) and the other is in the background, but both use the same connection library shared via the runtime store.
So you have three modules: application, library and service.
You need to get the descriptor by module name, and then get the process id.
UPDATE1
String moduleName = "application";
int handle = CodeModuleManager.getModuleHandle(moduleName);
ApplicationDescriptor[] descriptors = CodeModuleManager
.getApplicationDescriptors(handle);
if (descriptors.length > 0 && descriptors[0] != null) {
ApplicationManager.getApplicationManager().getProcessId(descriptors[0]);
}
Then, to log which module uses the library, use
Application.getApplication().getProcessId();
inside the library methods. I think it's better to implement the logging inside the library.
Once you have the process id of the application from the library code, you can compare it with the ids found by module name, and then you will know which module is using the library code.
UPDATE2
Screenshot of the event log: http://img138.imageshack.us/img138/23/eventlog.jpg
library module code:
package library;
import net.rim.device.api.system.Application;
import net.rim.device.api.system.ApplicationDescriptor;
import net.rim.device.api.system.ApplicationManager;
import net.rim.device.api.system.CodeModuleManager;
import net.rim.device.api.system.EventLogger;
public class Logger {
// "AppLibSrvc" converted to long
long guid = 0xd4b6b5eeea339daL;
public Logger() {
EventLogger.register(guid, "AppLibSrvc", EventLogger.VIEWER_STRING);
}
public void log(String message) {
EventLogger.logEvent(guid, message.getBytes());
}
public void call() {
log("Library is used by " + getModuleName());
}
private String getModuleName() {
String moduleName = "";
String appModuleName = "application";
int appProcessId = getProcessIdByName(appModuleName);
String srvcModuleName = "service";
int srvcProcessId = getProcessIdByName(srvcModuleName);
int processId = Application.getApplication().getProcessId();
if (appProcessId == processId)
moduleName = appModuleName;
else if (srvcProcessId == processId)
moduleName = srvcModuleName;
return moduleName;
}
protected int getProcessIdByName(String moduleName) {
int processId = -1;
int handle = CodeModuleManager.getModuleHandle(moduleName);
ApplicationDescriptor[] descriptors = CodeModuleManager
.getApplicationDescriptors(handle);
if (descriptors.length > 0 && descriptors[0] != null) {
processId = ApplicationManager.getApplicationManager()
.getProcessId(descriptors[0]);
}
return processId;
}
}
application module code:
package application;
import java.util.Timer;
import java.util.TimerTask;
import library.Logger;
import net.rim.device.api.ui.UiApplication;
import net.rim.device.api.ui.container.MainScreen;
public class App extends UiApplication {
public App() {
pushScreen(new Scr());
}
public static void main(String[] args) {
App app = new App();
app.enterEventDispatcher();
}
}
class Scr extends MainScreen {
public Scr() {
Timer timer = new Timer();
TimerTask task = new TimerTask() {
public void run() {
Logger logger = new Logger();
logger.call();
}
};
timer.schedule(task, 3000, 3000);
}
}
service module code:
package service;
import java.util.Timer;
import java.util.TimerTask;
import library.Logger;
import net.rim.device.api.system.Application;
public class App extends Application {
public App() {
Timer timer = new Timer();
TimerTask task = new TimerTask() {
public void run() {
Logger logger = new Logger();
logger.call();
}
};
timer.schedule(task, 3000, 3000);
}
public static void main(String[] args) {
App app = new App();
app.enterEventDispatcher();
}
}