How to run TestCafe runner class with Docker

I am new to TestCafe and want to run my test cases with a runner class in a Docker container.
I am able to run a single test case through Docker, but when I try to run via the runner class, I cannot get it to work.
I have followed this thread https://github.com/DevExpress/testcafe/issues/2761 but I don't know how to define the "Environment initialization steps from the TestCafe Docker script in your runner."
My runner file:
const createTestCafe = require('testcafe');

function runTest(tcOptions) {
    createTestCafe('localhost', 8080, 8081)
        .then(function (tc) {
            const runner = tc.createRunner();
            return runner
                .src(['./apps/tests/*.test.js'])
                .browsers(['firefox'])
                .concurrency(4)
                .reporter([
                    'list',
                    {
                        name: 'html',
                        output: './dist/testcafe/testCafe-report.html'
                    }
                ])
                .screenshots('./dist/testcafe/testcafe-screenshots', true)
                .run(tcOptions)
                .catch(function (error) {
                    console.log(error);
                });
        })
        .then(failedCount => {
            if (failedCount > 0) console.log('Error Tests failed: ' + failedCount);
            else console.log('All Desktop Tests Passed');
            process.exit(failedCount > 0 ? 1 : 0);
        });
}

const tcOptions = {
    debugMode: false
};

runTest(tcOptions);
and I am running this Docker command:
docker run -v `pwd`/:/tests -e "NODE_PATH=/tests/node_modules" -it --entrypoint node testcafe/testcafe /tests/apps/testcafe//testcafe-desktop-run.js
It fails with the following error:
{ Error: No tests to run. Either the test files contain no tests or the filter function is too restrictive.
    at Bootstrapper._getTests (/tests/node_modules/testcafe/src/runner/bootstrapper.js:92:19) code: 'E1009', data: [] }

You need to specify the full path to your test files, or change the working directory to the /tests directory inside the container (the -w option), so the relative globs in runner.src() resolve.
As for the environment initialization step you asked about: it runs the in-memory display server, and you can skip it if you run your tests in a headless browser.
Here is a command that works on my side with the Runner class:
docker run -v //c/Users/User/test-docker:/tests -w=/tests -it --entrypoint node testcafe/testcafe /tests/test-run.js
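Applied to the command from the question, the fix would look roughly like this (a sketch, assuming the same /tests mount and script location as in the question):

```shell
# Mount the project, set the working directory to /tests so the relative
# globs in runner.src() resolve, then run the runner script.
docker run -v "$PWD":/tests -w /tests \
    -e "NODE_PATH=/tests/node_modules" \
    -it --entrypoint node testcafe/testcafe \
    /tests/apps/testcafe/testcafe-desktop-run.js
```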

Related

Playwright is not running tests on Selenium Grid + Docker

I'm trying to run some Playwright (Node.js) tests in Docker through Selenium Grid, but the tests fail with a timeout; Chrome apparently never starts. Locally, without Docker, everything works fine.
According to the Playwright documentation, it would be enough to run the tests using the following command:
SELENIUM_REMOTE_URL=http://localhost:4444/wd/hub npx playwright test
But that is not working.
I'm building the environment in Docker by running the following file through Powershell:
$maxNodes = 1

function GetImages()
{
    docker pull selenium/hub:latest
    docker pull selenium/node-chrome:latest
}

function CreateGrid()
{
    docker network create grid
}

function CreateHub()
{
    docker run -d -p 4442-4444:4442-4444 --net grid --name hub selenium/hub:latest
}

function CreateNodes()
{
    $nodes = 1
    while($nodes -le $maxNodes)
    {
        docker run -d -P --net grid -e SE_EVENT_BUS_HOST=hub -e SE_EVENT_BUS_PUBLISH_PORT=4442 -e SE_EVENT_BUS_SUBSCRIBE_PORT=4443 -e SE_NODE_SESSION_TIMEOUT=120 selenium/node-chrome:latest
        $nodes++
    }
}

cls
GetImages
CreateGrid
CreateHub
CreateNodes
cls
Write-Host "HUB AND NODES CREATED!!"
After running the tests with the command above, the Grid UI shows an "empty" session with nothing running, and the browser shows a blank page.
The config from playwright.config.ts is:
import { PlaywrightTestConfig } from '@playwright/test';

const config: PlaywrightTestConfig = {
    forbidOnly: !!process.env.CI,
    retries: process.env.CI ? 1 : 0,
    timeout: 20000,
    workers: 1,
    use: {
        baseURL: 'https://www.google.com',
        viewport: { width: 1280, height: 720 },
        browserName: "chromium",
        channel: "chrome",
        headless: false
    }
};

export default config;
And the test running is:
test('Acessar Google', async ({ page }) => {
    await page.goto('/');
    const length = await page.locator('input[type=submit]').count();
    expect(length >= 1).toBeTruthy();
});
I don't know what could be going wrong.
Can anyone help me?
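One sanity check worth running before pointing Playwright at the Grid (a suggested debugging step, not part of the original question): Selenium Grid 4 exposes a /status endpoint that reports whether the hub is up and a node has registered.

```shell
# Query the hub's status endpoint; "ready": true in the JSON response
# means at least one node has registered and can accept sessions.
curl -s http://localhost:4444/status
```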

Running Karate UI tests with “driverTarget” in GitLab CI

Question was:
I would like to run Karate UI tests using the driverTarget options to test my Java Play app, which is running locally during the same job with sbt run.
I have a simple assertion to check for a property, but whenever the test runs I keep getting "description":"TypeError: Cannot read property 'getAttribute' of null". This is my karate-config.js:
if (env === 'ci') {
    karate.log('using environment:', env);
    karate.configure('driverTarget',
        {
            docker: 'justinribeiro/chrome-headless',
            showDriverLog: true
        });
}
This is my test scenario:
Scenario: test 1: some test
    Given driver 'http://localhost:9000'
    waitUntil("document.readyState == 'complete'")
    match attribute('some selector', 'some attribute') == 'something'
My guess is that because justinribeiro/chrome-headless is running in its own container, localhost:9000 is different in the container compared to what's running outside of it.
Is there any workaround for this? Thanks.
As guessed in the question, a Docker container cannot talk to a localhost port on the host: "My guess is that because justinribeiro/chrome-headless is running in its own container, localhost:9000 is different in the container compared to what's running outside of it."
To let the container reach an app running on a host port, use the special hostname host.docker.internal instead of localhost.
Change to make:
From: Given driver 'http://localhost:9000'
To: Given driver 'http://host.docker.internal:9000'
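Note that host.docker.internal resolves out of the box on Docker Desktop for Windows and macOS; on Linux it has to be mapped explicitly when starting the container (supported since Docker 20.10):

```shell
# Map host.docker.internal to the host's gateway IP on Linux.
docker run --add-host=host.docker.internal:host-gateway justinribeiro/chrome-headless
```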
Additionally, I was able to use the ptrthomas/karate-chrome image in CI (GITLAB) by inserting the following inside my gitlab-ci.yml file
stages:
  - uiTest

featureOne:
  stage: uiTest
  image: docker:latest
  cache:
    paths:
      - .m2/repository/
  services:
    - docker:dind
  script:
    - docker run --name karate --rm --cap-add=SYS_ADMIN -v "$PWD":/karate -v "$HOME"/.m2:/root/.m2 ptrthomas/karate-chrome &
    - sleep 45
    - docker exec -w /karate karate mvn test -DargLine='-Dkarate.env=docker' -Dtest=testParallel
  allow_failure: true
  artifacts:
    paths:
      - reports
      - ${CLOUD_APP_NAME}.log
My karate-config.js file looks like:
if (karate.env == 'docker') {
    karate.configure('driver', {
        type: 'chrome',
        showDriverLog: true,
        start: false,
        beforeStart: 'supervisorctl start ffmpeg',
        afterStop: 'supervisorctl stop ffmpeg',
        videoFile: '/tmp/karate.mp4'
    });
}

Jenkins. Running docker containers in parallel (declarative)

I want to run two Docker containers in a declarative Jenkins pipeline, because I have a backend container that uses a Selenium server container for tests. I know there is a scripted example, but I wonder if there is a declarative option.
Scripted looks like this:
node {
    checkout scm
    docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw"') { c ->
        docker.image('mysql:5').inside("--link ${c.id}:db") {
            /* Wait until mysql service is up */
            sh 'while ! mysqladmin ping -hdb --silent; do sleep 1; done'
        }
        docker.image('centos:7').inside("--link ${c.id}:db") {
            /*
             * Run some tests which require MySQL, and assume that it is
             * available on the host name `db`
             */
            sh 'make check'
        }
    }
}
In the end I used the description from here:
withRun - executes commands on the host
inside - executes commands inside the container
stage ('Test') {
    steps {
        // Create the network to which all containers will connect
        sh 'docker network create test'
        script {
            // withRun starts the container and doesn't stop it until everything inside is executed.
            // Commands inside withRun are executed on the HOST machine.
            docker.image('selenium/standalone-chrome').withRun("-p 4444:4444 --name=selenium -itd --network=test") {
                docker.image("$CONTAINER_NAME:front").withRun("-p 3001:80 --name=front -itd --network=test") {
                    // We start the backend container...
                    docker.image("$CONTAINER_NAME:back").withRun("-p 8001:80 --name=back -itd --network=test") {
                        // ...and with the inside command execute commands *surprise* inside the container
                        docker.image("$CONTAINER_NAME:back").inside("-itd --network=test") {
                            // execute commands inside the container
                        }
                    }
                }
            }
        }
    }
}

How do I set up postgres database in Jenkins pipeline?

I am using Docker to provide a Postgres database for my app. I have been testing the app with Cypress for some time and it works fine. I want to set up Jenkins for further testing, but I am stuck.
On my device, I would use the following commands to create it:
docker create -e POSTGRES_DB=myDB -p 127.0.0.1:5432:5432 --name myDB postgres
docker start myDB
How can I simulate this in a Jenkins pipeline? I need the DB for the app to work.
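(As an aside, not from the question: the create/start pair above is equivalent to a single detached run.)

```shell
# Create and start the Postgres container in one step, detached.
docker run -d -e POSTGRES_DB=myDB -p 127.0.0.1:5432:5432 --name myDB postgres
```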
I use a Dockerfile as my agent, and I have tried putting the ENV variables there, but it does not work. Docker is not installed in the pipeline environment.
The way I see it, the options are:
Create an image by using a
Somehow install Docker inside the pipeline and use the same commands
Maybe with master/slave nodes? I don't understand them well yet.
This might be a use case for the sidecar pattern, one of Jenkins Pipeline's advanced features.
For example (from the above site):
node {
    checkout scm
    docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw"') { c ->
        docker.image('mysql:5').inside("--link ${c.id}:db") {
            /* Wait until mysql service is up */
            sh 'while ! mysqladmin ping -hdb --silent; do sleep 1; done'
        }
        docker.image('centos:7').inside("--link ${c.id}:db") {
            /*
             * Run some tests which require MySQL, and assume that it is
             * available on the host name `db`
             */
            sh 'make check'
        }
    }
}
The above example uses the object exposed by withRun, which has the
running container’s ID available via the id property. Using the
container’s ID, the Pipeline can create a link by passing custom
Docker arguments to the inside() method.
The best thing is that the containers are automatically stopped and removed when the work is done.
EDIT:
To use a docker network instead, you can do the following (there is an open Jira to support this out of the box). First, a helper function:
def withDockerNetwork(Closure inner) {
    def networkId = UUID.randomUUID().toString()
    try {
        sh "docker network create ${networkId}"
        inner.call(networkId)
    } finally {
        sh "docker network rm ${networkId}"
    }
}
Actual usage:
withDockerNetwork { n ->
    docker.image('sidecar').withRun("--network ${n} --name sidecar") { c ->
        docker.image('main').inside("--network ${n}") {
            // do something with host "sidecar"
        }
    }
}
For declarative pipelines:
pipeline {
    agent any
    environment {
        POSTGRES_HOST = 'localhost'
        POSTGRES_USER = 'myuser'
    }
    stages {
        stage('run!') {
            steps {
                script {
                    docker.image('postgres:9.6').withRun(
                        "-h ${env.POSTGRES_HOST} -e POSTGRES_USER=${env.POSTGRES_USER}"
                    ) { db ->
                        // You can use your own image here, but psql needs to be installed inside it
                        docker.image('postgres:9.6').inside("--link ${db.id}:db") {
                            sh '''
                                psql --version
                                RETRIES=10
                                until psql -h ${POSTGRES_HOST} -U ${POSTGRES_USER} -c "select 1" > /dev/null 2>&1 || [ $RETRIES -eq 0 ]; do
                                    echo "Waiting for postgres server, $((RETRIES-=1)) remaining attempts..."
                                    sleep 1
                                done
                            '''
                            sh 'echo "your commands here"'
                        }
                    }
                }
            }
        }
    }
}
Related to Docker wait for postgresql to be running

Remove docker container at the end of each test

I'm using Docker to scale the test infrastructure / browsers based on the number of requests received in Jenkins.
I created a Python script to identify the total number of spec files and the browser type, and to spin up that many Docker containers. The Python code has the logic to determine how many nodes are currently in use or stale, and from that it determines the required number of containers.
I want to programmatically delete the container / de-register the Selenium node at the end of each spec file (the Docker --rm flag is not helping me), so that the next test gets a clean browser and environment.
The Selenium grid runs on the same box as Jenkins. Once I invoke protractor protractor.conf.js (Step 3), the grid starts distributing the tests to the containers created in Step 1.
When I say '--rm' is not helping, I mean that after Step 3 the communication is mainly between the Selenium hub and the nodes. I'm finding it difficult to determine which node / container was used by the grid to execute a test, and to remove that container before the grid sends another test to it.
-- Jenkins Build Stage --
Shell:
# Step 1
python ./create_test_machine.py ${no_of_containers} # This will spin-up selenium nodes
# Step 2
npm install # install node modules
# Step 3
protractor protractor.conf.js # Run the protractor tests
--Python code to spin up containers - create_test_machine.py--
Python Script:
import sys
import json
import time

import docker
import docker.utils
import requests

c = docker.Client(base_url='unix://var/run/docker.sock', version='1.23')
my_envs = {'HUB_PORT_4444_TCP_ADDR': '172.17.0.1', 'HUB_PORT_4444_TCP_PORT': 4444}

def check_available_machines(no_of_machines):
    t = c.containers()
    noof_running_containers = len(t)
    if noof_running_containers == 0:
        print("0 containers running. Creating " + str(no_of_machines) + " new containers...")
        spinup_test_machines(no_of_machines)
    else:
        for obj_container in t:
            print(obj_container)
            container_ip_addr = obj_container['NetworkSettings']['Networks']['bridge']['IPAddress']
            res = requests.get('http://' + container_ip_addr + ':5555/wd/hub/sessions')
            obj = json.loads(res.content)
            node_inuse = len(obj['value'])
            if node_inuse != 0:
                noof_running_containers -= 1
        if noof_running_containers < no_of_machines:
            spinup_test_machines(no_of_machines - noof_running_containers)
    return

def spinup_test_machines(no_of_machines):
    '''
    Parameter : Number of test nodes to spin up
    '''
    print("Creating " + str(no_of_machines) + " new containers...")
    # my_envs = docker.utils.parse_env_file('docker.env')
    for i in range(0, no_of_machines):
        new_container = c.create_container(image='selenium/node-chrome', environment=my_envs)
        response = c.start(container=new_container.get('Id'))
        print(new_container, response)
    return

if len(sys.argv) - 1 == 1:
    no_of_machines = int(sys.argv[1]) + 2
    check_available_machines(no_of_machines)
    time.sleep(30)
else:
    print("Invalid number of parameters")
Here the difference between running docker run with -d and with --rm can be seen clearly.
Using the -d option:
C:\Users\apps>docker run -d --name testso alpine /bin/echo 'Hello World'
5d447b558ae6bf58ff6a2147da8bdf25b526bd1c9f39117498fa017f8f71978b
Check the logs:
C:\Users\apps>docker logs testso
'Hello World'
Check the last-run containers:
C:\Users\apps>docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5d447b558ae6 alpine "/bin/echo 'Hello Wor" 35 hours ago Exited (0) 11 seconds ago testso
Finally, the user has to remove it explicitly:
C:\Users\apps>docker rm -f testso
testso
With --rm, the container vanishes, including its logs, as soon as the process run inside it completes. No trace of the container remains.
C:\Users\apps>docker run --rm --name testso alpine /bin/echo 'Hello World'
'Hello World'
C:\Users\apps>docker logs testso
Error: No such container: testso
C:\Users\apps>docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
This should make it clear how to run a container that leaves no trace once the process inside it finishes.
To start a container in detached mode, use -d=true or just -d. By design, containers started in detached mode exit when the root process used to run the container exits. A container in detached mode cannot be automatically removed when it stops, which means you cannot use the --rm option together with -d.
See https://docs.docker.com/engine/reference/run/
You can use nose tests. For every def test_xxx(), nose will call the setup and teardown functions attached with the @with_setup decorator. Below is an example:
import docker
from nose.tools import *

c = docker.Client(base_url='unix://var/run/docker.sock', version='1.23')
my_envs = {'HUB_PORT_4444_TCP_ADDR': '172.17.0.1', 'HUB_PORT_4444_TCP_PORT': 4444}
my_containers = {}

def setup_docker():
    """ Set up the test environment:
    create/start your docker container(s) and populate the my_containers dict.
    """

def tear_down_docker():
    """ Tear down the test environment. """
    for container in my_containers.itervalues():
        try:
            c.stop(container=container.get('Id'))
            c.remove_container(container=container.get('Id'))
        except Exception as e:
            print e

@with_setup(setup=setup_docker, teardown=tear_down_docker)
def test_xxx():
    # do your test here
    # you can call a subprocess to run your selenium
    pass
Or, you can write a separate Python script that detects the containers you set up for your tests and then does something like this:
for container in my_containers.itervalues():
    try:
        c.stop(container=container.get('Id'))
        c.remove_container(container=container.get('Id'))
    except Exception as e:
        print e
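A blunter alternative to the Python cleanup above (an illustrative sketch, not from the answers): between specs, force-remove every container created from the node image, so the grid re-registers fresh nodes afterwards.

```shell
# List the IDs of all containers based on the selenium/node-chrome image
# and force-remove them; xargs -r skips the rm when the list is empty.
docker ps -a -q --filter ancestor=selenium/node-chrome | xargs -r docker rm -f
```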
