HOST NOT RESOLVABLE seen ONLY for FIRST request with RemoteWebDriver - docker

HOST NOT RESOLVABLE is seen only for the first request when using RemoteWebDriver from selenium-server-standalone 2.41 (Machine A) against Firefox 28 on Machine B, where both the hub and the node run on Machine B.
The debugging session has been going on for two days with no concrete outcome. Can anyone please point us in the right direction?
Are we missing anything in the setup here? What is the correct way to use selenium-server-standalone 2.41 with Firefox 28 for the RemoteWebDriver use case?
Maven Dependency
<dependency>
    <groupId>org.seleniumhq.selenium</groupId>
    <artifactId>selenium-server-standalone</artifactId>
    <version>2.41.0</version>
</dependency>
SETUP AND EXECUTION DETAILS
We have two machines: Machine A (Linux ARM64) and Machine B (Linux x86).
The way we are using them now is as follows:
Machine A (Linux ARM64) is where the RemoteWebDriver invocation occurs; selenium-server-standalone-2.41.0.jar is used.
On Machine B (Linux x86), we have a running Docker container that acts as both hub and node, with port 4444 exposed from the container to host Machine B:
java -jar /u01/selenium/selenium-server-standalone-2.44.0.jar -role hub
java -jar /u01/selenium/selenium-server-standalone-2.44.0.jar -role node -hub http://localhost:4444/grid/register
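For reference, publishing port 4444 from the container to host Machine B would be done with something like the following (the image name is a placeholder, not from the question):
docker run -d -p 4444:4444 <your-selenium-image>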
The host:port is then accessed from the ARM-based Machine A.
OUTPUT SEEN
The first connection results in a WebDriverException, HOST NOT RESOLVABLE; however, subsequent connections result in no exceptions, and everything just works after the first request failure. Geckodriver is not used here because we are using Selenium 2.41, as per the Mozilla documentation:
https://firefox-source-docs.mozilla.org/testing/geckodriver/Support.html
CODE USED
The code below is executed from Machine A.
import java.net.MalformedURLException;
import java.net.URL;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class Main {
    public static void main(String[] args) throws Exception {
        DesiredCapabilities capability = DesiredCapabilities.firefox();
        WebDriver driver = new RemoteWebDriver(new URL("http://<<MACHINEB>>:4444/wd/hub"), capability);
        driver.manage().window().maximize();
        driver.get("http://localhost:4444");
        Thread.sleep(10000);
        driver.close();
    }
}

I suspect the issue here is not with your Selenium code but with the server you are calling. It is probably "sleeping" initially and "wakes up" once it detects a request coming in.
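If that is the case, a pragmatic workaround is to retry the session creation when the first attempt fails. A minimal sketch; the retry count and wait time are arbitrary assumptions, not values from the question:

import java.net.URL;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebDriverException;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class DriverFactory {
    // Hypothetical helper: retries RemoteWebDriver creation because only the
    // first request is observed to fail with HOST NOT RESOLVABLE.
    // Assumes attempts >= 1.
    public static WebDriver createWithRetry(URL hub, DesiredCapabilities caps, int attempts)
            throws InterruptedException {
        WebDriverException last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                return new RemoteWebDriver(hub, caps);
            } catch (WebDriverException e) {
                last = e;           // remember the most recent failure
                Thread.sleep(2000); // give the grid time to warm up (arbitrary wait)
            }
        }
        throw last;
    }
}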

Related

Testcontainers fix bootstrapServers port for Kafka

I want to specify a custom port from the config for the Testcontainers Kafka image, so that I can reuse the bootstrapServers parameter later for black-box application testing.
I am using the Kafka module from https://github.com/testcontainers/testcontainers-scala.
I did not find an API for fixing the port while running the container; all I found is that the port is dynamically assigned to the container.
class KafkaSetUpSpec extends AnyFlatSpec with TestContainerForAll with Matchers {

  override val containerDef: KafkaContainer.Def = KafkaContainer.Def()

  import org.testcontainers.Testcontainers
  //Testcontainers.exposeHostPorts(9092)

  it should "return Kafka connection options for kafka container" in withContainers { kafkaContainer =>
    kafkaContainer.bootstrapServers.nonEmpty shouldBe true
    kafkaContainer.bootstrapServers.split(":")(2).toInt shouldBe kafkaContainer.container.getMappedPort(9093)
  }
}
All I need is to take the connection URL from the config and pin it in the Kafka container as a fixed port; do you have any idea how to do that?
How do I assign the same port from the outside world?
Additional info: the client is not in the same network and runs locally.
Testcontainers provides dynamic port mapping for all modules by design. You have to use the provided kafkaContainer.getBootstrapServers() after the container has been started to get the dynamically mapped port. This then needs to be injected into your system under test.
You can make use of the experimental reusable mode to reuse a Testcontainers-instrumented container across JVMs.
Add testcontainers.reuse.enable=true to the ~/.testcontainers.properties file on your local machine and set withReuse(true) on the KafkaContainer. Note that reusable mode currently does not support Docker networks.
See further examples in the corresponding PR:
https://github.com/testcontainers/testcontainers-java/pull/1781
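A minimal sketch of reusable mode using the testcontainers-java Kafka module (the image tag is an assumption, not from the answer):

import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.utility.DockerImageName;

public class ReusableKafkaExample {
    public static void main(String[] args) {
        // With testcontainers.reuse.enable=true in ~/.testcontainers.properties,
        // a second JVM run attaches to this container instead of starting a new one.
        KafkaContainer kafka = new KafkaContainer(
                DockerImageName.parse("confluentinc/cp-kafka:7.4.0")) // assumed tag
                .withReuse(true);
        kafka.start();

        // Dynamically mapped address; inject this into the system under test.
        System.out.println(kafka.getBootstrapServers());
    }
}

Note that the port is still dynamically mapped; reuse merely keeps the same container alive across runs, so the mapped port stays stable between JVMs rather than being fixed up front.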

SEVERE: https://jenkins.domainname.com/tcpSlaveAgentListener/ appears to be publishing an invalid X-Instance-Identity

We're trying to connect a previously connected agent to a Jenkins server.
We get the following error:
SEVERE: https://jenkins.domainname.com/tcpSlaveAgentListener/ appears to be publishing an invalid X-Instance-Identity.
java.io.IOException: https://jenkins.domainname.com/tcpSlaveAgentListener/ appears to be publishing an invalid X-Instance-Identity.
at org.jenkinsci.remoting.engine.JnlpAgentEndpointResolver.resolve(JnlpAgentEndpointResolver.java:287)
at hudson.remoting.Engine.innerRun(Engine.java:694)
at hudson.remoting.Engine.run(Engine.java:519)
The command to run the agent is:
java -jar agent.jar -jnlpUrl http://${private_ip}:8080/computer/mac/slave-agent.jnlp -secret ${secret} -workDir "/var/jenkins-sign"
We're running on macOS.
All TCP ports are open internally between the Mac and the ${private_ip}; I have telnet working.
As said, this agent was recently connected to the server, but the agent had a restart. We also upgraded the Jenkins server to the latest available version.
I updated the agent.jar file.
I think it's related to contacting ${private_ip} while the X-Instance-Identity says "jenkins.domainname.com", but I am not sure how to resolve it.
I only saw that there were recently changes in this area, but not a lot of helpful information other than that.
Does anyone have an idea?
In case anyone else runs into the issue, in my case it was because I was passing the entire URL, i.e.
http://someurl/jenkins/computer/test/slave-agent.jnlp
and what it really wanted was
http://someurl/jenkins/
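With a recent agent.jar, the invocation would then look something like the following (the node name here is a placeholder; use your agent's actual name):
java -jar agent.jar -url http://someurl/jenkins/ -secret ${secret} -name mac -workDir "/var/jenkins-sign"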
Unfortunately, I think it's the Jenkins upgrade that caused this, and I'm not sure there's a better solution than what I found.
Putting my solution here, but if anyone knows something better, I'd be happy to hear about it :)
Download the agent.jar
Download the slave-agent.jnlp and modify it:
Change all occurrences of https://jenkins.mydomain.com to http://[private_ip]:[port].
Start the process: java -jar agent.jar -jnlpUrl "file:/path/to/dir/slave-agent.jnlp" -workDir "/path/to/dir"
Do not add the secret to this command.
If you're using Jenkins agent as a service, remove the -secret argument from the file.
Set the global property jenkins.agent.inboundUrl to your private address (with http/https) + port + suffix (if set).
This value will be used as the URL in the JNLP file. This enables using a private address for inbound TCP agents, separate from the Jenkins root URL.
see: https://issues.jenkins.io/browse/JENKINS-63222
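Assuming the controller reads this as a Java system property (as described in JENKINS-63222; the address and port below are placeholders), one way to set it when launching Jenkins directly is:
java -Djenkins.agent.inboundUrl=http://[private_ip]:[port] -jar jenkins.war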

DTLSv1_listen unable to accept second client in a docker container

I'm experiencing an issue with OpenSSL/DTLS server.
Environment: Docker container based on CentOS 7
OpenSSL version: OpenSSL-1.1.1d
A non-blocking DTLS server using DTLSv1_listen on a UDP socket with SO_REUSEADDR is unable to accept a second client connection while it has already accepted a client connection and is serving it.
When the first client has finished, the second client connection is accepted.
I have used the dtls_udp_echo.c (taken from http://web.archive.org/web/20150617012520/http://sctp.fh-muenster.de/dtls-samples.html ) to carry out the test and reproduce the issue.
The test application has been compiled and executed within a Docker container with CentOS 7 as the base image, but the behaviour has been observed with other base images too (e.g. Red Hat, Ubuntu, Debian, SLES).
The same application running on bare metal works without any issue.
Is there any known compatibility issue between Docker and OpenSSL/DTLS?
Is there any specific configuration to be done to overcome this issue?
Best Regards

YarnCluster constructor hangs in dask-yarn

I'm using dask-yarn version 0.3.1, following the basic example on https://dask-yarn.readthedocs.io/en/latest/.
from dask_yarn import YarnCluster
from dask.distributed import Client

# Create a cluster where each worker has two cores and eight GB of memory
cluster = YarnCluster(environment='environment.tar.gz',
                      worker_vcores=2,
                      worker_memory="8GB")
The application is successfully submitted to the cluster, but control does not return to the console after the YarnCluster constructor. The following is the final output from startup:
18/09/19 16:14:24 INFO skein.Daemon: Submitting application...
18/09/19 16:14:24 INFO impl.YarnClientImpl: Submitted application application_1534573350864_34823
18/09/19 16:14:27 INFO skein.Daemon: Notifying that application_1534573350864_34823 has started. 1 callbacks registered.
18/09/19 16:14:27 INFO skein.Daemon: Removing callbacks for application_1534573350864_34823
One thing I noticed when I was initially testing from within a Docker container was an exception related to grpc failing to parse the http_proxy environment variable. When running from a dedicated cluster edge node, I don't see this exception, but control still does not return after the constructor.

ASP.NET Core MVC on Windows IoT (Raspberry Pi)

I am new to Windows IoT and am trying to get my first .NET Core app running on a Raspberry Pi.
It is not that I think the Raspberry Pi is the perfect place to host web sites; my ambition is to implement a measurement and control system on the Pi and make the whole thing accessible through a REST API.
First things first, I wanted to create a standard .NET Core app from the VS2017 template and get it running on the Pi.
The template built an app that was available on http://localhost:62100.
As I knew from previous experiments, the app was only listening on localhost, so I modified the Program class as follows:
public class Program
{
    public static void Main(string[] args)
    {
        BuildWebHost(args).Run();
    }

    public static IWebHost BuildWebHost(string[] args)
    {
        var configuration = new ConfigurationBuilder()
            .AddCommandLine(args)
            .Build();

        var hostUrl = configuration["hosturl"];
        if (string.IsNullOrEmpty(hostUrl))
            hostUrl = "http://0.0.0.0:62100";

        return new WebHostBuilder()
            .UseKestrel()
            .UseUrls(hostUrl)
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseIISIntegration()
            .UseStartup<Startup>()
            .UseConfiguration(configuration)
            .Build();
    }
}
And now the server was also usable from my phone using the PC's IP address.
To prepare the project for the Pi, I opened a PowerShell in the folder where the project file resides (in my case C:\Users\\Documents\Visual Studio 2017\Projects\AspNetCore\AspNetCore) and ran the command:
dotnet publish -r win10-arm
And then I copied all of the files placed under bin\Debug\netcoreapp2.0\win10-arm\publish to the Pi and started the application from the PowerShell connected to the Pi:
.\AspNetCore.exe
The Pi (after some deep thought) answers just as it did when run on the PC:
[192.168.200.106]: PS C:\CoreApplications\AspNetCore> .\AspNetCore.exe
Hosting environment: Production
Content root path: C:\CoreApplications\AspNetCore
Now listening on: http://0.0.0.0:62100
Application started. Press Ctrl+C to shut down.
But trying to access the server from my browser (http://192.168.200.106:62100) times out with ERR_CONNECTION_TIMED_OUT.
What am I missing?
You need to add the port to the firewall with the following command in PowerShell; by default, ASP.NET Core binds to http://localhost:5000.
netsh advfirewall firewall add rule name="ASPNet Core 2 Server Port" dir=in action=allow protocol=TCP localport=62100
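To verify the rule was created, something like the following should work (assuming the same rule name as above):
netsh advfirewall firewall show rule name="ASPNet Core 2 Server Port"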
