Automation testing framework for Jetpack Compose

We have rewritten a couple of features in Jetpack Compose successfully. We have hit a roadblock: our QA team says the existing automation scripts they have written no longer work for Compose UI screens.
Background about the automation script:
QA uses an Appium script that drives UIAutomator2 to automate the elements. Appium Inspector is used to identify locators (IDs).
We don't have IDs in Compose UI.
We tried adding testTag, but it does not show up in Appium Inspector.
Please share what kind of framework changes are needed for the automation script to support Compose UI.
Thanks

Unfortunately, Appium's UIAutomator2 driver does not support the testTag property yet.
There is already an issue on Appium's repository requesting this property.

Fellas, I just managed to access Compose elements by simply adding the property contentDescription = "UseThisInstead" to the composable in Android Studio.
Later on I could access the element with Appium/UIAutomator2 by XPath:
driver.findElement(By.xpath("//*[@content-desc='UseThisInstead']")).isDisplayed();
Try that.
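For completeness, on the Compose side that content description can be set through the semantics modifier. A minimal sketch (the Button composable here is just an example; "UseThisInstead" matches the locator above):
Button(
    onClick = { /* ... */ },
    // Exposed to UIAutomator2/Appium as content-desc
    modifier = Modifier.semantics { contentDescription = "UseThisInstead" }
) {
    Text("Submit")
}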

UPDATE
According to the official Compose docs on interoperability with UiAutomator (since Compose version 1.3.3):
The testTagsAsResourceId semantics property can be enabled for a particular subtree of your composable hierarchy to ensure that all nested composables with Modifier.testTag are accessible from UiAutomator.
In Compose:
Scaffold(
    // Enables for all composables in the hierarchy.
    modifier = Modifier.semantics {
        testTagsAsResourceId = true
    }
) {
    // Modifier.testTag is accessible from UiAutomator for composables nested here.
    LazyColumn(
        modifier = Modifier.testTag("myLazyColumn")
    ) {
        // content
    }
}
In Tests:
val device = UiDevice.getInstance(getInstrumentation())
val lazyColumn: UiObject2 = device.findObject(By.res("myLazyColumn"))
// some interaction with the lazyColumn
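On the Appium side, the enabled test tag then shows up as the element's resource-id, so the script can locate it by ID instead of XPath. A minimal sketch, assuming the Appium Java client with the UiAutomator2 driver (AppiumBy.id is the java-client 8 form; older clients use MobileBy/By.id):
// "myLazyColumn" (the testTag) is exposed as the element's resource-id
WebElement lazyColumn = driver.findElement(AppiumBy.id("myLazyColumn"));
lazyColumn.isDisplayed();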

Related

No such property: ToInputStream for class: Script4

I have a situation where I want to import my graph data into the database. I have JanusGraph (latest version) running with Cassandra (version 3) and Elasticsearch (version 6.6.0) using Docker. I was advised to use the Gryo format, so I tried this command:
graph.io(IoCore.gryo()).reader().create().readGraph(ToInputStream.from("my_graph.kryo"), graph);
but ended up with an error:
No such property: ToInputStream for class: Script4
The documentation I am following is here. Please take a look and point me to the right procedure. Thanks in advance!
ToInputStream is not a function of Gremlin or JanusGraph. I believe that it is only a function of IBM Compose so unless you are running JanusGraph on that specific platform, this command will not work.
Versions of JanusGraph that utilize TinkerPop 3.4.x will support the io() step and this is the preferred manner in which to load gryo (as well as graphson and graphml) files.
Graph graph = ... // setup JanusGraph instance
GraphTraversalSource g = traversal().withGraph(graph); // might use withRemote() here instead depending on how you are connecting I suppose
g.io("graph.kryo").read().iterate()
Note that if you are connecting remotely - it seems you are sending scripts to the Docker instance given your error - then be sure that the "graph.kryo" file path is accessible to Docker. That's what's nice about ToInputStream from Compose, as it allows you to access remote sources.
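For example, if the server runs in Docker, one common way is to mount the directory holding the file into the container and then reference the in-container path in the script (the image name and paths below are only placeholders, adjust them to your setup):
# Make my_graph.kryo visible inside the container at /data
docker run -v /path/on/host:/data janusgraph/janusgraph
# Then, in the script sent to the server, use the in-container path
g.io("/data/my_graph.kryo").read().iterate()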

Up-to-date list of running Docker containers maintained in an exported Go variable

I am trying to use the Golang SDK of Docker in order to maintain a slice variable with currently running containers on the local Docker instance. This slice is exported from a package and I want to use it to feed a web page.
I am not really used to goroutines and channels, which is why I am wondering whether I have found a good solution to my problem.
I have a docker package as follows.
https://play.golang.org/p/eMmqkMezXZn
It has a Running variable containing the current state of running containers.
var Running []types.Container
I use a reload function to load the running containers in the Running variable.
// Reload the list of running containers
func reload() error {
    ...
    Running, err = cli.ContainerList(context.Background(), types.ContainerListOptions{
        All: false,
    })
    ...
}
And then I start a goroutine from the init function to listen to Docker events and trigger the reload function accordingly.
func init() {
    ...
    // Listen for docker events
    go listen()
    ...
}

// Listen for docker events
func listen() {
    filter := filters.NewArgs()
    filter.Add("type", "container")
    filter.Add("event", "start")
    filter.Add("event", "die")

    msg, errChan := cli.Events(context.Background(), types.EventsOptions{
        Filters: filter,
    })

    for {
        select {
        case err := <-errChan:
            panic(err)
        case <-msg:
            fmt.Println("reloading")
            reload()
        }
    }
}
My question is, is it proper to update a variable from inside a goroutine (in terms of sync)? Maybe there is a cleaner way to achieve what I am trying to build?
Update
My concern here is not really about caching. It is more about hiding the "complexity" of the process of listening for events and updating the list from the Docker SDK. I wanted to provide something like an index that lets the end user easily loop over and display the currently running containers.
I was aware of data-race problems in threaded programs, but I did not realize I was actually in a concurrent context here (I had never written concurrent programs in Go before).
I effectively need to rethink the solution to be more idiomatic. As far as I can see, I have two options here: either protect the variable with a mutex or rethink the design to integrate channels.
What matters most to me is to hide or encapsulate the synchronization method used, so that package users need not be concerned with how the shared state is protected.
Would you have any recommendations?
Thanks a lot for your help,
Loric
No, it is not idiomatic Go to share the Running variable between two goroutines, which is what happens here: it is shared between the goroutine running your main function and the one spawned by go listen().
The reason is that it breaks with:
Do not communicate by sharing memory; instead, share memory by communicating. ¹
So the design of the API needs to change in order to be idiomatic; you need to remove the Running variable and replace it with... what? It depends on what you are trying to achieve. If you are trying to cache cli.ContainerList because you need to call it often and it might be expensive, you should implement a cache which is invalidated on each cli.Events notification.
What is your motivation?
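As a sketch of what the mutex/encapsulation option mentioned in the update could look like (names are illustrative, not a definitive design): keep the slice unexported, guard it with a sync.RWMutex, and expose a getter that returns a copy, so callers never touch the shared state directly.
package docker

import (
    "sync"

    "github.com/docker/docker/api/types"
)

// running is the cached list; it is deliberately unexported now.
var (
    mu      sync.RWMutex
    running []types.Container
)

// Running returns a snapshot (copy) of the currently running containers,
// so callers never touch the shared slice directly.
func Running() []types.Container {
    mu.RLock()
    defer mu.RUnlock()
    out := make([]types.Container, len(running))
    copy(out, running)
    return out
}

// setRunning replaces the cached list under the write lock; reload() would
// call this with the result of cli.ContainerList.
func setRunning(list []types.Container) {
    mu.Lock()
    running = list
    mu.Unlock()
}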

How are webdriver and chromedriver options accessed in Spectron's new Application()?

When I start a new Spectron application (for each test suite) I would like to fix the size of the app (for consistency across all machines and reloads).
Commands like setBounds and maximise do change dimensions, but they only do so after the app has started (which means that some components have already assumed certain dimensions, which then changes test results).
In the Spectron docs, various launch options are available. I've tried to use the webdriver and chromedriver options, but they don't seem to work. Here is an example:
app = new Application({
    path: kElectronPath,
    webdriverOptions: {
        width: 1368,
        height: 769,
    },
});
I just assumed that the webdriver options came from the BrowserWindow class. How are webdriver and chromedriver options accessed in Spectron?
This should help
this.app = new Application({
    path: './ac.exe',
    args: ['app'],
    webdriverOptions: { deprecationWarnings: false }
});
Pass the parameters that need to be changed during app launch, and put your new Application config in a before hook, as sketched below.
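A rough sketch of that before-hook setup with Mocha (the path and args below are placeholders for your own app):
const { Application } = require('spectron');

let app;

before(async () => {
    // Build and start the Application once per suite with the launch options you need
    app = new Application({
        path: './ac.exe',
        args: ['app'],
        webdriverOptions: { deprecationWarnings: false }
    });
    await app.start();
});

after(async () => {
    if (app && app.isRunning()) {
        await app.stop();
    }
});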

Application insights 2.2.1 in .net core 2.0 - turn off output to debug

This is the same question as this one, but for the recent version of Application Insights (2.2.1).
Since updating to version 2.2, the debug output is filled with AI data even if it is disabled the way it used to be.
Previously AI was enabled in Startup and I could do something like this:
services.AddApplicationInsightsTelemetry(options =>
{
    options.EnableDebugLogger = false;
    options.InstrumentationKey = new ConnectionStringGenerator().GetAITelemetryKey();
});
The new method of adding application insights, per the new VS templates, is to add it in Program.cs like this:
public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>()
        .UseApplicationInsights(connectionStringGenerator.GetAITelemetryKey())
        .UseSerilog()
        .Build();
In this case there is no overload that takes any options, and if I remove UseApplicationInsights and revert to the original method it makes no difference. Either way I get the debug output window filled with AI logs.
In fact, even if there is no call to load AI at all (i.e. I remove both UseApplicationInsights and AddApplicationInsightsTelemetry), I still get the logs.
Thanks for any help.
You can opt out of telemetry (for debug, for example) by setting a DOTNET_CLI_TELEMETRY_OPTOUT environment variable to 1.
Visual Studio is lighting up Application Insights even if you have no code to enable it. You can create an environment variable, ASPNETCORE_PREVENTHOSTINGSTARTUP = True, to prevent Visual Studio from lighting up Application Insights.
How to do this?
Right-click the project in VS and select Properties. In the Debug options, add the environment variable.
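Under the hood that just adds the variable to the project's Properties/launchSettings.json, so you can also set it there directly (the profile name below is just an example):
{
    "profiles": {
        "MyWebApp": {
            "commandName": "Project",
            "environmentVariables": {
                "ASPNETCORE_ENVIRONMENT": "Development",
                "ASPNETCORE_PREVENTHOSTINGSTARTUP": "True"
            }
        }
    }
}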

Select a different runner for cucumber.api.cli.Main?

Is it possible to define/specify a runner when starting tests from Cucumber's command line (cucumber.api.cli.Main)?
My reason for this is so I can generate XML reports in Jenkins and push the results to ALM Octane.
I kind of inherited this project and it's using Gradle to do a javaexec and call cucumber.api.cli.Main.
I know it's possible to do this with @RunWith(OctaneCucumber.class) when using the JUnit runner + Maven (or only the JUnit runner); otherwise that annotation is ignored. I have the custom runner with that annotation, but when I run from cucumber.api.cli.Main I can't find a way to use it and my annotation just gets ignored.
What @Grasshopper suggested didn't exactly work, but it made me look in the right direction.
Instead of adding the code as a plugin, I managed to "hack/load" the Octane reporter by creating a copy of cucumber.api.cli.Main, using it as a base to run the CLI commands, and changing the run method a bit to add the plugin at runtime. I needed to do this because the plugin requires quite a few parameters in its constructor. It might not be the perfect solution, but it allowed me to keep the Gradle build process I initially had.
public static byte run(String[] argv, ClassLoader classLoader) throws IOException {
    RuntimeOptions runtimeOptions = new RuntimeOptions(new ArrayList<String>(asList(argv)));
    ResourceLoader resourceLoader = new MultiLoader(classLoader);
    ClassFinder classFinder = new ResourceLoaderClassFinder(resourceLoader, classLoader);
    Runtime runtime = new Runtime(resourceLoader, classFinder, classLoader, runtimeOptions);

    //==================== Added the following lines ================
    // Hardcoded runner(?) class. If it's changed, it will need to be changed here also
    OutputFile outputFile = new OutputFile(Main.class);
    runtimeOptions.addPlugin(new HPEAlmOctaneGherkinFormatter(resourceLoader, runtimeOptions.getFeaturePaths(), outputFile));
    //==============================================================

    runtime.run();
    return runtime.exitStatus();
}
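The Gradle javaexec call then just points at the copied class instead of cucumber.api.cli.Main. A sketch (the class name, glue package, and feature path below are placeholders):
task runCucumber(type: JavaExec) {
    // Run the copied/modified CLI entry point instead of cucumber.api.cli.Main
    main = 'com.mycompany.cucumber.CustomMain'
    classpath = sourceSets.test.runtimeClasspath
    args = ['--glue', 'com.mycompany.steps', 'src/test/resources/features']
}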
