Selenoid with docker-compose doesn't start all browsers - docker

A question about Docker:
I have Windows 10 + WSL2 + Docker Desktop for Windows, with Selenoid installed in Ubuntu; it launches, and I have downloaded the images (Chrome 90, 91, etc.).
The aerokube/selenoid and aerokube/selenoid-ui containers launch successfully, and the tests run in them from IDEA pass with flying colors.
I want to run tests in two versions of Chrome via docker-compose.
Config browsers.json:
{
  "chrome": {
    "default": "90.0",
    "versions": {
      "90.0": {
        "env": ["LANG=ru_RU.UTF-8", "LANGUAGE=ru:en", "LC_ALL=ru_RU.UTF-8", "TZ=Europe/Moscow"],
        "image": "selenoid/chrome:90.0",
        "tmpfs": {"/tmp": "size=512m"},
        "hosts": ["x01.aidata.io:127.0.0.1"],
        "port": "4444"
      },
      "91.0": {
        "env": ["LANG=ru_RU.UTF-8", "LANGUAGE=ru:en", "LC_ALL=ru_RU.UTF-8", "TZ=Europe/Moscow"],
        "image": "selenoid/chrome:91.0",
        "tmpfs": {"/tmp": "size=512m"},
        "hosts": ["x01.aidata.io:127.0.0.1"],
        "port": "4444"
      }
    }
  }
}
Config docker-compose.yaml:
version: '3.4'
services:
  selenoid:
    image: aerokube/selenoid:latest-release
    volumes:
      - "${PWD}/init/selenoid:/etc/selenoid"
      - "${PWD}/work/selenoid/video:/opt/selenoid/video"
      - "${PWD}/work/selenoid/logs:/opt/selenoid/logs"
      - "/var/run/docker.sock:/var/run/docker.sock"
    environment:
      - OVERRIDE_VIDEO_OUTPUT_DIR=work/selenoid/video
    command: ["-conf", "etc/selenoid/browsers.json", "-video-output-dir", "/opt/selenoid/video", "-log-output-dir", "/opt/selenoid/logs"]
    ports:
      - "4444:4444"
    network_mode: bridge
In IDEA:

@BeforeEach
public void initDriver() throws IOException {
    final String url = "http://localhost:4444/wd/hub";
    WebDriver driver = new RemoteWebDriver(new URL(url), DesiredCapabilities.chrome());
    driver.manage().window().setSize(new Dimension(1920, 1024));
    WebDriverRunner.setWebDriver(driver);
}

@AfterEach
public void stopDriver() {
    Optional.ofNullable(WebDriverRunner.getWebDriver()).ifPresent(WebDriver::quit);
}
It starts only version 90 (it is the first one in browsers.json), passes successfully, and closes, ignoring everything else. What needs to be corrected?)

We figured it out: everything is OK with Docker. The grid configs for the Selenide tests need to be edited so that each run requests a specific browser version; otherwise Selenoid starts the "default" version from browsers.json (90.0). A sketch follows below.
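For reference, a minimal sketch of requesting a specific version from the test side (a hedged example, not from the original thread; with Selenium 3 style DesiredCapabilities the key is "version", with W3C capabilities it is "browserVersion"):

    // Hedged sketch: ask Selenoid for a specific Chrome version.
    DesiredCapabilities capabilities = DesiredCapabilities.chrome();
    capabilities.setVersion("91.0"); // must match a version key in browsers.json
    WebDriver driver = new RemoteWebDriver(new URL("http://localhost:4444/wd/hub"), capabilities);
    WebDriverRunner.setWebDriver(driver);

Running the suite once per version (90.0, then 91.0) would cover both images.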
close topic

Related

Executing WDIO-Test locally in a docker container throws error: @wdio/cli:utils: A service failed in the 'onPrepare' hook

I'm executing a headless e2e-test locally in a docker-container like so:
docker-compose up
yarn test
I get this error-message at the beginning:
ERROR @wdio/cli:utils: A service failed in the 'onPrepare' hook
TypeError: Cannot read property 'args' of undefined
    at DockerLauncher.onPrepare (C:\myProgs\myWDIOTest\node_modules\wdio-docker-service\lib\launcher.js:30:9)
    at C:\myWDIOTest\myWDIOTest\node_modules\@wdio\cli\build\utils.js:24:40
    at Array.map (<anonymous>)
    at Object.runServiceHook (C:\myProgs\myWDIOTest\node_modules\@wdio\cli\build\utils.js:21:33)
    at Launcher.run (C:\myProgs\myWDIOTest\node_modules\@wdio\cli\build\launcher.js:61:27)
    at processTicksAndRejections (internal/process/task_queues.js:95:5)
I'm not using the onPrepare hook in the wdio configuration file (see below).
The test carries on and actually finishes successfully every time, just like it's supposed to. At this point, simply suppressing this error message would be a viable solution for me (since the error doesn't compromise the test results).
There is a solution here for tests run on Sauce Labs; however, that won't work for me. But it makes me wonder whether I should look for the solution in my docker-compose file:
version: "3"
services:
chrome:
image: selenium/node-chrome:4.0.0-rc-1-prerelease-20210713
shm_size: 2gb
depends_on:
- selenium-hub
environment:
- SE_EVENT_BUS_HOST=selenium-hub
- SE_EVENT_BUS_PUBLISH_PORT=4442
- SE_EVENT_BUS_SUBSCRIBE_PORT=4443
ports:
- "6900:5900"
selenium-hub:
image: selenium/hub:4.0.0-rc-1-prerelease-20210713
container_name: selenium-hub
ports:
- "4442:4442"
- "4443:4443"
- "4444:4444"
These are the contents of my wdio configuration file:
import BrowserOptions from "./browserOpts";
import CucumberOptions from "./cucumberOpts";

const fs = require('fs');
const wdioParallel = require('wdio-cucumber-parallel-execution');
const reporter = require('cucumber-html-reporter');

const currentTime = new Date().toJSON().replace(/:/g, "-");
const jsonTmpDirectory = `reports/json/tmp/`;
let featureFilePath = `featureFiles/*.feature`;
let timeout = 30000;

exports.config = {
    hostname: 'localhost',
    port: 4444,
    sync: true,
    specs: [
        featureFilePath
    ],
    maxInstances: 1,
    capabilities: [{
        maxInstances: 1,
        browserName: "chrome",
        'goog:chromeOptions': BrowserOptions.getChromeOpts(),
    }],
    logLevel: 'error',
    bail: 0,
    baseUrl: 'http://localhost',
    waitforTimeout: timeout,
    connectionRetryTimeout: timeout * 3,
    connectionRetryCount: 3,
    services: ['docker'],
    framework: 'cucumber',
    reporters: [
        ['cucumberjs-json', {
            jsonFolder: jsonTmpDirectory,
            language: 'de'
        }]
    ],
    cucumberOpts: CucumberOptions.getDefaultSettings(),
    before: function (capabilities, specs) {
        browser._setWindowSize(1024, 768)
    },
    beforeSuite: function (suite) {
        console.log(`Suite "${suite.fullTitle}" from file "${suite.file}" starts`);
    },
    beforeTest: function (test) {
        console.log(`Test "${test.title}" starts`);
    },
    afterTest: function (test) {
        console.log(`Test "${test.title}" finished`);
    },
    onComplete: () => {
        console.log('<<< E2E-TEST COMPLETED >>>\n\n');
        try {
            let consolidatedJsonArray = wdioParallel.getConsolidatedData({
                parallelExecutionReportDirectory: jsonTmpDirectory
            });
            let jsonFile = `${jsonTmpDirectory}report.json`;
            fs.writeFileSync(jsonFile, JSON.stringify(consolidatedJsonArray));
            let options = {
                theme: 'bootstrap',
                jsonFile: jsonFile,
                output: `reports/html/report-${currentTime}.html`,
                reportSuiteAsScenarios: true,
                scenarioTimestamp: true,
                launchReport: true,
                ignoreBadJsonFile: true
            };
            reporter.generate(options);
        } catch (err) {
            console.log('err', err);
        }
    }
};
Your docker-compose.yml should work with WebdriverIO if you remove services: ['docker'], from wdio.conf.js.
Based on a comment in this video, services: ['docker'] is needed only if you want wdio to instantiate its own containers.
To get rid of this error message, specify a separate configuration file for the wdio Docker runs and leave services as an empty array. For me it worked, and wdio knows that it should run the tests against the container. Hope it helps:
services: [],
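A minimal sketch of what such a separate configuration file could look like (the file name and the base-config import are assumptions, not from the original answer):

    // wdio.docker.conf.js - hedged sketch: reuse the base config, but do not
    // let wdio-docker-service manage containers that docker-compose already started.
    const { config: baseConfig } = require('./wdio.conf');

    exports.config = {
        ...baseConfig,
        services: [], // empty: wdio connects to the already-running grid on localhost:4444
    };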

DbContext.Database.Migrate() not providing password to the backend

My goal is to create seeds of users when the database is created.
I'm using IdentityServer4, with Npgsql and docker-compose.
The current behavior creates the database as well as the IdentityServer user manager tables (AspNetUsers, AspNetUserTokens, AspNetUserRoles, etc.), so I know it's migrating that data to the database. But it skips over the task of running the user seed because it throws a password exception:
Npgsql.NpgsqlException (0x80004005): No password has been provided but the backend requires one (in MD5)
Here's the code in my Program.cs.
public static void Main(string[] args)
{
    var host = CreateHostBuilder(args).Build();
    using (var scope = host.Services.CreateScope())
    {
        var services = scope.ServiceProvider;
        try
        {
            var userManager = services.GetRequiredService<UserManager<User>>();
            var roleManager = services.GetRequiredService<RoleManager<IdentityRole>>();
            var context = services.GetRequiredService<ApplicationDbContext>();
            context.Database.Migrate(); // ERROR HAPPENS HERE
            Task.Run(async () => await UserAndRoleSeeder.SeedUsersAndRoles(roleManager, userManager)).Wait(); // I NEED THIS TO RUN
        }
        catch (Exception ex)
        {
            var logger = services.GetRequiredService<ILogger<Program>>();
            logger.LogError(ex, "Error has occurred while migrating to the database.");
        }
    }
    host.Run();
}
Here is the code where it gets the connection string in Startup.cs:
services.AddDbContext<ApplicationDbContext>(options =>
{
    options.UseNpgsql(Configuration.GetConnectionString("DefaultConnection"),
        b =>
        {
            b.MigrationsAssembly("GLFManager.App");
        });
});
If I use a breakpoint here, it shows that the connection string was obtained along with the user id and password. I verified the password was correct; otherwise I don't think it would have initially committed the IdentityServer user manager tables.
Here is my appsettings.json file where the connection string lives:
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  },
  "AllowedHosts": "*",
  "ConnectionStrings": {
    "DefaultConnection": "Host=localhost;Port=33010;Database=glfdb;User Id=devdbuser;Password=devdbpassword"
  }
}
I'm thinking it's somewhere in the docker-compose file where some configuration is not registering. This is the docker-compose file:
version: '3.4'
services:
  glfmanager.api:
    image: ${DOCKER_REGISTRY-}glfmanagerapi
    container_name: "glfmanager.api"
    build:
      context: .
      dockerfile: GLFManager.Api/Dockerfile
    ports:
      - "33000:80"
      - "33001:443"
    environment:
      - ConnectionStrings__DefaultConnection=Server=glfmanager.db;Database=glfdb;User Id=devdbuser:password=devdbpassword;
      - Identity_Authority=http://glfmanager.auth
    volumes:
      - .:/usr/src/app
    depends_on:
      - "glfmanager.db"
  glfmanager.auth:
    image: ${DOCKER_REGISTRY-}glfmanagerauth
    container_name: "glfmanager.auth"
    build:
      context: .
      dockerfile: GLFManager.Auth/Dockerfile
    ports:
      - "33005:80"
      - "33006:443"
    environment:
      - ConnectionStrings__DefaultConnection=Server=glfmanager.db;Database=glfdb;User Id=devdbuser:password=devdbpassword;
    volumes:
      - .:/usr/src/app
    depends_on:
      - "glfmanager.db"
  glfmanager.db:
    restart: on-failure
    image: "mdillon/postgis:11"
    container_name: "glfmanager.db"
    environment:
      - POSTGRES_USER=devdbuser
      - POSTGRES_DB=glfdb
      - POSTGRES_PASSWORD=devdbpassword
    volumes:
      - glfmanager-db:/var/lib/postresql/data
    ports:
      - "33010:5432"
volumes:
  glfmanager-db:
I used this code from a class I took on backend development, and it is identical to the project I built there, which works. So I'm stumped as to why this is giving me that password error.
Found the problem: I used a ':' instead of a ';' in my docker-compose file between User Id and Password.
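For reference, the corrected environment entry would read (same credentials as above):

    - ConnectionStrings__DefaultConnection=Server=glfmanager.db;Database=glfdb;User Id=devdbuser;Password=devdbpassword;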

Accessing chrome devtools protocol in docker grid

My tests are running against a Docker grid with the Selenium Docker images for the hub and Chrome. What I am trying to do is access the Chrome DevTools Protocol in the Chrome node so that I can access/intercept a request. Any help is appreciated.
I was able to get it working locally without Docker, but could not figure out a way to connect to DevTools in the Chrome node of the Docker grid. Below are my docker-compose file and code.
docker-compose:
version: "3.7"
services:
selenium_hub_ix:
container_name: selenium_hub_ix
image: selenium/hub:latest
environment:
SE_OPTS: "-port 4445"
ports:
- 4445:4445
chrome_ix:
image: selenium/node-chrome-debug:latest
container_name: chrome_node_ix
depends_on:
- selenium_hub_ix
ports:
- 5905:5900
- 5903:5555
- 9222:9222
environment:
- no_proxy=localhost
- HUB_PORT_4444_TCP_ADDR=selenium_hub_ix
- HUB_PORT_4444_TCP_PORT=4445
- NODE_MAX_INSTANCES=5
- NODE_MAX_SESSION=5
- TZ=America/Chicago
volumes:
- /dev/shm:/dev/shm
Here is sample code showing how I got it working locally without the grid (ChromeDriver on my Mac):
const CDP = require('chrome-remote-interface');
let webDriver = require("selenium-webdriver");

module.exports = {
    async openBrowser() {
        this.driver = await new webDriver.Builder().forBrowser("chrome").build();
        // Pull the DevTools address out of the driver's (internal) session capabilities.
        let session = await this.driver.session_;
        let debuggerAddress = session.caps_.map_.get("goog:chromeOptions").debuggerAddress;
        let addressParts = debuggerAddress.split(":");
        console.log(addressParts);
        try {
            const protocol = await CDP({
                port: addressParts[1]
            });
            const { Fetch } = protocol;
            await Fetch.enable({
                patterns: [{
                    urlPattern: "*",
                }]
            });
            // Log every intercepted request.
            await Fetch.requestPaused(async ({ interceptionId, request }) => {
                console.log(request);
            });
        } catch (err) {
            console.log(err.message);
        }
        return this.driver;
    },
};
When it is the grid, I just change the way I build the driver to the following:
this.driver = await new webDriver.Builder().usingServer(process.env.SELENIUM_HUB_IP).withCapabilities(webDriver.Capabilities.chrome()).build();
With that, I am getting the port number, but I could not create a CDP session; I get a connection refused error.
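One thing worth checking (an assumption, not a confirmed fix): the debuggerAddress capability returned through the grid points at an address inside the Chrome node's container network, which is not reachable from the test host, hence the connection refused. Since the compose file above publishes 9222:9222, a hedged sketch of connecting through the published port would be:

    // Hedged sketch: connect CDP via the node's published port instead of the
    // container-internal debuggerAddress. Assumes Chrome's remote debugging port
    // inside chrome_node_ix is 9222 and listens on all interfaces.
    const protocol = await CDP({ host: "localhost", port: 9222 });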

Serialization Error using Corda Docker Image

I get the following error (for each node) when I run docker-compose up. I configured the network parameters myself, as well as the nodes, without using the network bootstrapper.
[ERROR] 08:07:48+0000 [main] internal.NodeStartupLogging.invoke - Exception during node startup: Serialization scheme ([6D696E696D756D], P2P) not supported. [errorCode=1e6peth, moreInformationAt=https://errors.corda.net/OS/4.0/1e6peth]
I have tried to change the properties in the network-parameters file, yet unsuccessfully so far.
Here are my config files:
myLegalName : "O=Notary, L=London, C=GB"
p2pAddress : "localhost:10008"
devMode : true
notary : {
validating : false
}
rpcSettings = {
address : "notary:10003"
adminAddress : "notary:10004"
}
rpcUsers=[
{
user="user"
password="test"
permissions=[
ALL
]
}
]
detectPublicIp : false
myLegalName : "O=PartyA, L=London, C=GB"
p2pAddress : "localhost:10005"
devMode : true
rpcSettings = {
address : "partya:10003"
adminAddress : "partya:10004"
}
rpcUsers=[
{
user=corda
password=corda_initial_password
permissions=[
ALL
]
}
]
detectPublicIp : false
myLegalName : "O=PartyB, L=London, C=GB"
p2pAddress : "localhost:10006"
devMode : true
rpcSettings = {
address : "partyb:10003"
adminAddress : "partyb:10004"
}
rpcUsers=[
{
user=corda
password=corda_initial_password
permissions=[
ALL
]
}
]
detectPublicIp : false
as well as my network-parameters file and my docker-compose.yml file:
minimumPlatformVersion=4
notaries=[NotaryInfo(identity=O=Notary, L=London, C=GB, validating=false)]
maxMessageSize=10485760
maxTransactionSize=524288000
whitelistedContractImplementations {
}
eventHorizon="30 days"
epoch=1
version: '3.7'
services:
  Notary:
    image: corda/corda-zulu-4.0:latest
    container_name: Notary
    networks:
      - corda
    volumes:
      - ./nodes/notary_node.conf:/etc/corda/node.conf
      - ./nodes/network-parameters:/opt/corda/network-parameters
  PartyA:
    image: corda/corda-zulu-4.0:latest
    container_name: PartyA
    networks:
      - corda
    volumes:
      - ./nodes/partya_node.conf:/etc/corda/node.conf
      - ./nodes/network-parameters:/opt/corda/network-parameters
      - ./build/libs/:/opt/corda/cordapps
  PartyB:
    image: corda/corda-zulu-4.0:latest
    container_name: PartyB
    networks:
      - corda
    volumes:
      - ./nodes/partyb_node.conf:/etc/corda/node.conf
      - ./nodes/network-parameters:/opt/corda/network-parameters
      - ./build/libs/:/opt/corda/cordapps
networks:
  corda:
Many thanks in advance for your help!
It looks like it is indeed an issue with the serialization scheme: the node expects network-parameters to be the serialized, signed file produced by the network bootstrapper, not a hand-written text file. (The [6D696E696D756D] in the error is hex for the ASCII string "minimum", i.e. the beginning of the hand-written file being misread as a serialization magic.)
Also, in our most recent Corda 4.4 release, we have published an official image of the containerized Corda node.
Feel free to check out our recent guide on how to start a Corda node with Docker: https://medium.com/corda/containerising-corda-with-corda-docker-image-and-docker-compose-af32d3e8746c
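For completeness, a sketch of generating a properly serialized network-parameters file with the bootstrapper (the exact jar name/version is an assumption; check the Corda docs for your release):

    # Generates node directories plus a signed, serialized network-parameters file
    java -jar corda-tools-network-bootstrapper-4.0.jar --dir ./nodes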

How to resolve service names in docker swarm mode for Hyperledger Composer?

I am using docker swarm mode for my Hyperledger Composer setup, and I am new to Docker. My Fabric network is running okay. When I use service names in the connection.json file, it results in "REQUEST_TIMEOUT" while installing the network. But when I use the host's IP address instead of the service name, everything works fine. So how can I resolve the service/container name?
Here is my peer configuration:
peer1:
  deploy:
    replicas: 1
    restart_policy:
      condition: on-failure
      delay: 5s
      max_attempts: 3
  hostname: peer1.eprocure.org.com
  image: hyperledger/fabric-peer:$ARCH-1.1.0
  networks:
    hyperledger-ov:
      aliases:
        - peer1.eprocure.org.com
  environment:
    - CORE_LOGGING_LEVEL=debug
    - CORE_CHAINCODE_LOGGING_LEVEL=DEBUG
    - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
    - CORE_PEER_ID=peer1.eprocure.org.com
    - CORE_PEER_ADDRESS=peer1.eprocure.org.com:7051
    - CORE_PEER_LOCALMSPID=eProcureMSP
    - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/peer/msp
    - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
    - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb1:5984
    - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hyperledger-ov
    - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.eprocure.org.com:7051
    - CORE_PEER_ENDORSER_ENABLED=true
    - CORE_PEER_GOSSIP_USELEADERELECTION=true
    - CORE_PEER_GOSSIP_ORGLEADER=false
    - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true
    - CORE_PEER_PROFILE_ENABLED=true
  working_dir: /opt/gopath/src/github.com/hyperledger/fabric
  command: peer node start
  volumes:
    - /var/run/:/host/var/run/
    - /export/composer/genesis-folder:/etc/hyperledger/configtx
    - /export/composer/crypto-config/peerOrganizations/eprocure.org.com/peers/peer1.eprocure.org.com/msp:/etc/hyperledger/peer/msp
    - /export/composer/crypto-config/peerOrganizations/eprocure.org.com/users:/etc/hyperledger/msp/users
  ports:
    - 8051:7051
    - 8053:7053
And here is my current connection.json with IP addresses:
"peers": {
"peer0.eprocure.org.com": {
"url": "grpc://192.168.0.147:7051",
"eventUrl": "grpc://192.168.0.147:7053"
},
"peer1.eprocure.org.com": {
"url": "grpc://192.168.0.147:8051",
"eventUrl": "grpc://192.168.0.147:8053"
},
"peer2.eprocure.org.com": {
"url": "grpc://192.168.0.147:9051",
"eventUrl": "grpc://192.168.0.147:9053"
}
},
I have tried the following before:
"peers": {
"peer0.eprocure.org.com": {
"url": "grpc://peers_peer0:7051",
"eventUrl": "grpc://peers_peer0:7053"
},
"peer1.eprocure.org.com": {
"url": "grpc://peers_peer1:8051",
"eventUrl": "grpc://peers_peer2:8053"
},
"peer2.eprocure.org.com": {
"url": "grpc://peers_peer2:9051",
"eventUrl": "grpc://peers_peer2:9053"
}
}
But this doesn't work.
Can anyone please let me know how I can solve my problem?
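One hedged observation, based on the compose file above: swarm service names and the hyperledger-ov aliases resolve only for containers attached to that same overlay network, and they resolve to the container's internal ports (7051/7053), not the host-published ones (8051/8053). Under that assumption, a connection.json sketch using the aliases would look like:

    "peers": {
      "peer1.eprocure.org.com": {
        "url": "grpc://peer1.eprocure.org.com:7051",
        "eventUrl": "grpc://peer1.eprocure.org.com:7053"
      }
    }

This requires the Composer client to run in a container attached to hyperledger-ov; from outside that network, only the published host ports are reachable.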
