I'm building an Android test framework using Appium, Serenity and the Page Object Model.
I want to take a screenshot whenever any of the steps fails.
Can anybody help me with the code, and let me know where to put it?
E.g. I have Pages, Steps and StepDefinition classes.
I'm not sure where to implement it.
I compare pictures against a template using the OpenCV library. Here's what I do:
Add a method to base_page.py (imports shown for completeness):
import base64
import os

def compare_image_with_screenshot(self, image_name: str):
    # load the template from the repo and the current device screenshot,
    # both base64-encoded as required by get_images_similarity
    os.chdir('../src/screenshots/')
    with open(f'{image_name}.png', 'rb') as img:
        first_image = base64.b64encode(img.read()).decode('ascii')
    second_image = base64.b64encode(self._driver.get_screenshot_as_png()).decode('ascii')
    return self._driver.get_images_similarity(first_image, second_image)
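One caveat: get_images_similarity is Appium's image-comparison endpoint, and the comparison runs on the Appium server, which needs OpenCV support installed; if the call fails with a feature-not-available error, check the server setup first.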
Use this method in your Page Object file:

@allure.step('Compare screenshot with template')
def get_image_comparison_percents(self):
    """
    Compare the screenshot taken on the device with the template in the repo.
    The comparison result is a similarity score; the test is OK if it is above 0.9 (90%).
    """
    result = self.compare_image_with_screenshot(OfflineLocators.offline_stub)
    return result.get('score')
Use the step in the test that needs it:

@allure.link(url='https://jira.myproject.tech/browse/TEST-1', name='TEST-1 - Offline stub')
@allure.title('Offline stub')
def test_offline_stub(appdriver):
    TourActions(appdriver).skip_tour()
    Navigation(appdriver).open_my_offline_page()
    assert Offline(appdriver).get_page_title_text() == 'Offline'
    assert Offline(appdriver).get_image_comparison_percents() > 0.9
As a result I get a similarity percentage for the two pictures, and I can assert against whatever threshold I need. For my tests this works fine.
If you mean screenshots attached to the test results, just let me know and I can show you an example of how I did it with Allure.
If you mean ordinary Appium screenshots, please share the errors you're catching.
Below is an example of recording a video when a test fails.
conftest.py (in the root directory)
@pytest.fixture
def appdriver():
    driver = config.get_driver_caps()
    if config.IS_IOS:
        driver.start_recording_screen(videoQuality='high', videoType='mpeg4', videoFps='24')
    else:
        driver.start_recording_screen()
    yield driver
    attach_device_log(driver)
    save_screenshot(driver)
    driver.quit()
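The excerpt above starts recording but never stops it; here is a minimal sketch of an attach_video helper (the name is mine, not from the original project) that could be called in the teardown next to save_screenshot, relying on Appium's stop_recording_screen, which returns the video base64-encoded:

import base64

import allure

def attach_video(appdriver):
    # stop_recording_screen returns the recorded video as a base64 string
    video_base64 = appdriver.stop_recording_screen()
    allure.attach(
        base64.b64decode(video_base64),
        name='video',
        attachment_type=allure.attachment_type.MP4
    )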
The save_screenshot method:

def save_screenshot(appdriver):
    allure.attach(
        appdriver.get_screenshot_as_png(),
        name='screenshot',
        attachment_type=allure.attachment_type.PNG
    )
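And since the original question asked for screenshots only when a step fails: pytest doesn't expose the test outcome to fixtures out of the box, but the documented hookwrapper recipe makes it available on the test item. A sketch reusing the names from the example above:

import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # store the report for each test phase on the item itself
    outcome = yield
    rep = outcome.get_result()
    setattr(item, "rep_" + rep.when, rep)

@pytest.fixture
def appdriver(request):
    driver = config.get_driver_caps()
    yield driver
    # attach the screenshot only if the test body actually failed
    rep_call = getattr(request.node, "rep_call", None)
    if rep_call is not None and rep_call.failed:
        save_screenshot(driver)
    driver.quit()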
I am testing Luigi's ability to run tasks in a Docker container, i.e. I would like Luigi to spawn a container from a given Docker image and execute a task in it.
As far as I understand, luigi.contrib.docker_runner.DockerTask exists for this purpose. I tried to modify its command and add an output:
import luigi
from luigi.contrib.docker_runner import DockerTask

class Task(DockerTask):
    def output(self):
        return luigi.LocalTarget("bla.txt")

    def command(self):
        return f"touch {self.output()}"

if __name__ == "__main__":
    luigi.build([Task()], workers=2, local_scheduler=True)
But I am getting a TypeError from the docker library (the traceback was originally posted as a screenshot).
Is my use of DockerTask erroneous? Unfortunately, I cannot find any examples of use cases...
Here is the answer from my reply to your email:
First, luigi.LocalTarget doesn't return a file path or string but a Target object, so f"touch {self.output()}" isn't a valid shell command.
Second, I just looked into the documentation of DockerTask:
https://luigi.readthedocs.io/en/stable/_modules/luigi/contrib/docker_runner.html#DockerTask
command should be a property, not a method. This explains your error message: the code accesses Task().command and gets a function rather than a string. A solution might be to use a static attribute for the output name:
class Task(DockerTask):
    output_name = "bla.txt"

    def output(self):
        return luigi.LocalTarget(self.output_name)

    # `self` is not available in the class body, so reference the
    # class-level attribute directly
    command = f"touch {output_name}"
or to use a property decorator:
class Task(DockerTask):
    def output(self):
        return luigi.LocalTarget("bla.txt")

    @property
    def command(self):
        target = self.output()  # a Target object, not a string/path
        return f"touch {target.path}"
though I'm not sure whether this would work; I'd prefer the first option.
In the future, make sure to check the documentation for the base class you're implementing, e.g. whether an attribute is a method or a property, and maybe use a debugger to understand what's happening.
And for the sake of searchability and of saving data and bandwidth, please post error logs as text rather than screenshots, either as a snippet or via a link to a pastebin or something like that.
I've been following along with the testdriven.io tutorial for setting up a FastAPI with Docker. The first test I've written using PyTest errored out with the following message:
TypeError: Settings(environment='dev', testing=True, database_url=AnyUrl('postgres://postgres:postgres@web-db:5432/web_test', scheme='postgres', user='*****', password='*****', host='web-db', host_type='int_domain', port='5432', path='/web_test')) is not a callable object.
Looking at the error, you'll notice that the Settings object has a strange form; in particular, its database_url parameter seems to be wrapping a bunch of other parameters like password, port, and path. However, as shown below, my Settings class takes a different form.
From config.py:
# ... imports

class Settings(BaseSettings):
    environment: str = os.getenv("ENVIRONMENT", "dev")
    testing: bool = os.getenv("TESTING", 0)
    database_url: AnyUrl = os.environ.get("DATABASE_URL")

@lru_cache()
def get_settings() -> BaseSettings:
    log.info("Loading config settings from the environment...")
    return Settings()
Then, in the conftest.py module, I've overridden the settings above with the following:
import os

import pytest
from fastapi.testclient import TestClient

from app.main import create_application
from app.config import get_settings, Settings

def get_settings_override():
    return Settings(testing=1, database_url=os.environ.get("DATABASE_TEST_URL"))

@pytest.fixture(scope="module")
def test_app():
    app = create_application()
    app.dependency_overrides[get_settings] = get_settings_override()
    with TestClient(app) as test_client:
        yield test_client
As for the offending test itself, that looks like the following:
def test_ping(test_app):
    response = test_app.get("/ping")
    assert response.status_code == 200
    assert response.json() == {"environment": "dev", "ping": "pong", "testing": True}
The container is successfully running on my localhost without issue; this leads me to believe that the issue is wholly related to how I've set up the test and its associated config. However, the structure of the error and how database_url is wrapping up all these key-value pairs from docker-compose.yml gives me the sense that my syntax error could be elsewhere.
At this juncture, I'm not sure if the issue has something to do with how I set up test_ping.py, my construction of the settings_override, with the format of my docker-compose.yml file, or something else altogether.
So far, I've tried to fix this issue by reading up on the use of dependency overrides in FastApi, noodling with my indentation in the docker-compose, changing the TestClient from one provided by starlette to that provided by FastAPI, and manually entering testing mode.
Something I noticed when attempting to manually go into testing mode was that the container doesn't want to follow suit. I've tried setting testing to 1 in docker-compose.yml, and testing: bool = True in config.Settings.
I'm new to all of the relevant tech here and bamboozled. What is causing this discrepancy with my test? Any and all insight would be greatly appreciated. If you need to see any other files, or are interested in the package structure, just let me know. Many thanks.
Any dependency override through app.dependency_overrides should use the function being overridden as the key and the function that should be used instead as the value. In your case you picked the correct key, but you assigned the result of calling the override rather than the override itself:

app.dependency_overrides[get_settings] = get_settings_override()

This should be:

app.dependency_overrides[get_settings] = get_settings_override

The error message shows that FastAPI tried to call your Settings instance as a function, which hints that it expected a function there.
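For completeness, a sketch of the corrected fixture (same layout as your conftest.py, with only the assignment changed):

import os

import pytest
from fastapi.testclient import TestClient

from app.main import create_application
from app.config import get_settings, Settings

def get_settings_override():
    return Settings(testing=1, database_url=os.environ.get("DATABASE_TEST_URL"))

@pytest.fixture(scope="module")
def test_app():
    app = create_application()
    # pass the function itself; FastAPI calls it when resolving the dependency
    app.dependency_overrides[get_settings] = get_settings_override
    with TestClient(app) as test_client:
        yield test_client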
We have two input sources for creating a batch: Folder Import and Email Import.
I need to add a condition: if the source of an image is Email Import, it should not rotate the image, and likewise, if the source is Folder Import, it should rotate the image.
I have added a script for this in KTM.
It shows the proper message for the image source, but it does not stop the rotation of the image.
Check the script below for reference.
Public Function setRotationRule(ByVal pXDoc As CASCADELib.CscXDocument) As String
   Dim i As Integer
   Dim FullPath As String
   Dim PathArry() As String
   Dim xfolder As CscXFolder
   Set xfolder = pXDoc.ParentFolder
   While Not xfolder.IsRootFolder
      Set xfolder = xfolder.ParentFolder
   Wend
   'Added for KTM script testing
   FullPath = "F:\Emailmport\dilipnikam@gmail.com_09-01-2014_10-02-37\dfdsg.pdf"
   If xfolder.XValues.ItemExists("AC_FIELD_OriginalFileName") Then
      FullPath = xfolder.XValues.ItemByName("AC_FIELD_OriginalFileName").Value
   End If
   PathArry() = Split(FullPath, "\")
   MsgBox(PathArry(1))
   If Not PathArry(1) = "EmailImport" Then
      For i = 0 To pXDoc.CDoc.Pages.Count - 1
         pXDoc.CDoc.Pages(i).Rotation = Csc_RT_NoRotation
      Next i
   End If
End Function
The KTM Scripting Help has a misleading topic named "Dynamically Suppress Orientation Detection for Full Page OCR" where it shows setting Csc_RT_NoRotation from the Document_AfterClassifyXDoc event.
The reason I think this is misleading is because rotation may already have occurred before that event and thus setting the property has no effect. This can happen if layout classification has run, or if OCR has run (which can be triggered by content classification, or if any project-level locators need OCR). The sample in that topic does suggest that it is only for use when classifiers are not used, but it could be explained better.
The code you've shown would be best called from the event Document_BeforeProcessXDoc. This will run before the entire classify phase (including project-level locators), ensuring that rotation could not have already occurred.
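For example, a minimal wiring sketch (assuming setRotationRule lives in the same script class as the event; adjust to where your function actually sits in the project):

' Runs before the classify phase, so rotation cannot already have occurred
Private Sub Document_BeforeProcessXDoc(ByVal pXDoc As CASCADELib.CscXDocument)
   setRotationRule pXDoc
End Sub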
Of course, also make sure this isn't because of a typo or anything else preventing the code from actually executing, as mentioned in the comments.
I've set up TeamCity with my .sln file and got the unit tests to show up with TeamCity's CppUnit plugin, so I get test results in the TeamCity UI.
Now I'm trying to get trending reports to show up for my unit tests and code coverage.
As for code coverage, we're using vsinstr.exe and vsperfmon.exe, which produce an XML file.
I'm not quite sure what steps I should take to make the trending reports and code coverage (less important) show up.
I've already seen this post, but the answer seems to require editing the build script, which I don't think would work in my case since I'm building through MSBuild and the .sln file, and the tests are run through that build.
So basically I'm trying to get the Statistics tab to show up, and I'm not sure where to begin.
Just add a simple PowerShell step to your build configuration. Something like this:

function TeamCity-SetBuildStatistic([string]$key, [string]$value) {
    Write-Output "##teamcity[buildStatisticValue key='$key' value='$value']"
}

$outputFile = 'MetricsResults.xml'
$xml = [xml] (Get-Content $outputFile)
$metrics = $xml.CodeMetricsReport.Targets.Target[0].Modules.Module.Metrics
$metrics.Metric |
    foreach { TeamCity-SetBuildStatistic "$($_.Name)" $_.Value.Replace(',', '') }
It uses XML output from FxCop Metrics. You have to update the script for your actual schema.
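For context: TeamCity-SetBuildStatistic just writes a buildStatisticValue service message to the build output (e.g. ##teamcity[buildStatisticValue key='LinesOfCode' value='1200']). TeamCity parses these messages automatically, and each reported key becomes a series you can chart on the Statistics tab, which is exactly what you're after.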
I am aware that the Frank tool provides an option for capturing screenshots, but it is a user-defined step.
Taking a screenshot of the app:
Then /^I save a screenshot with prefix (\w+)$/ do |prefix|
  filename = prefix + Time.now.to_i.to_s
  %x[screencapture #{filename}.png]
end
But is there any way to save a screenshot by default in case of an unexpected test failure?
Assuming the application is still open, you could use an After hook to call the Frank step that takes screenshots if the test failed.
Try this:
After do |scenario|
  if scenario.failed?
    steps %Q{
      Then I save a screenshot with prefix test
    }
  end
end
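Place the hook in a file under features/support (hooks.rb is the conventional name); Cucumber loads everything in that directory automatically, so the hook will run after every scenario without touching your feature files.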