Selenium Grid - How to shut down the node after execution? - docker

I am trying to implement a solution that would shut down the node running inside a Docker (Swarm) container after a test run.
I looked at the docker remove command, but I cannot use docker container rm because the containers are managed at the service-task level.
I looked at the /lifecycle-manager API, but I cannot reach the node from the client: the docker stack runs behind an nginx server and only one port (4444) is exposed.
Finally I looked at extending the grid node (DefaultRemoteProxy). Excuse my bad Java code, this is my first stab at writing Java. With this, it looks like I can stop the node, but it re-registers with the hub.
How can I stop this re-registration process, or start the node without it?
My goal is to have a new container for every test and let the Docker orchestration bring up a new container when the node is shut down and the container gets removed (Docker API: https://docs.docker.com/engine/api/v1.24/).
// Imports assume the Selenium Grid 3.x package layout.
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.openqa.grid.common.RegistrationRequest;
import org.openqa.grid.internal.GridRegistry;
import org.openqa.grid.internal.TestSession;
import org.openqa.grid.internal.listeners.TestSessionListener;
import org.openqa.grid.selenium.proxy.DefaultRemoteProxy;
import org.openqa.grid.web.servlet.handler.RequestType;
import org.openqa.grid.web.servlet.handler.SeleniumBasedRequest;
import static org.openqa.grid.web.servlet.handler.RequestType.STOP_SESSION;

public class ExtendedProxy extends DefaultRemoteProxy implements TestSessionListener {

    public ExtendedProxy(RegistrationRequest request, GridRegistry registry) {
        super(request, registry);
    }

    @Override
    public void afterCommand(TestSession session, HttpServletRequest request, HttpServletResponse response) {
        RequestType type = SeleniumBasedRequest.createFromRequest(request, getRegistry()).extractRequestType();
        if (type == STOP_SESSION) {
            System.out.println("Going to Shutdown the Node");
            // Stop the registry and remove this proxy from it.
            GridRegistry registry = getRegistry();
            registry.stop();
            registry.removeIfPresent(this);
        }
    }
}
Hub log:
[DefaultGridRegistry.assignRequestToProxy] - Shutting down registry.
[DefaultGridRegistry.removeIfPresent] - Cleaning up stale test sessions on the unregistered node
[DefaultGridRegistry.add] - Registered a node
Node log:
[ActiveSessions$1.onStop] - Removing session de04928d-7056-4b39-8137-27e9a0413024 (org.openqa.selenium.firefox.GeckoDriverService)
[SelfRegisteringRemote.registerToHub] - Registering the node to the hub: http://localhost:4444/grid/register
[SelfRegisteringRemote.registerToHub] - The node is registered to the hub and ready to use

I figured out the solution. I am answering my own question, hoping it will benefit the community.
Start the node with the following command line flag; this stops the auto-registration thread from ever getting created:
-registerCycle 0
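For illustration, a complete node start command might then look like this (a sketch; the standalone jar version, hub URL, and proxy class name are assumptions, not from the original post):
java -jar selenium-server-standalone-3.141.59.jar \
  -role node \
  -hub http://localhost:4444/grid/register \
  -registerCycle 0 \
  -proxy com.example.ExtendedProxy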
And in your class that extends DefaultRemoteProxy, override afterSession:
@Override
public void afterSession(TestSession session) {
    totalSessionsCompleted++;
    GridRegistry gridRegistry = getRegistry();
    // Free every test slot so the hub assigns no further sessions to this node.
    for (TestSlot slot : getTestSlots()) {
        gridRegistry.forceRelease(slot, SessionTerminationReason.PROXY_REREGISTRATION);
    }
    teardown();
    gridRegistry.removeIfPresent(this);
}
When the client executes the driver.quit() method, the node de-registers from the hub.


.war on tomcat on docker, 404 on servlet

I finally found the motivation to work with Docker: I tried to deploy a basic "hello-world" servlet on a Tomcat running in a Docker container.
This servlet works perfectly when I run it on the Tomcat started by IntelliJ.
But when I use it with Docker, using this Dockerfile:
FROM tomcat:latest
ADD example.war /usr/local/tomcat/webapps/
EXPOSE 8080
CMD ["/usr/local/tomcat/bin/catalina.sh", "run"]
And I build/start the image/container:
docker build -t example .
docker run -p 8090:8080 example
The index.jsp is displayed correctly at localhost:8090/example/, but I get a 404 when trying to access the servlet at localhost:8090/example/hello-servlet.
At the same time, I can access localhost:8080/example/hello-servlet when my non-dockerized Tomcat runs, and it works well.
Here is the servlet code :
package io.bananahammock.bananahammock_backend;

import java.io.*;
import javax.servlet.http.*;
import javax.servlet.annotation.*;

@WebServlet(name = "helloServlet", value = "/hello-servlet")
public class HelloServlet extends HttpServlet {
    private String message;

    public void init() {
        message = "Hello World!";
    }

    public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<html><body>");
        out.println("<h1>" + message + "</h1>");
        out.println("</body></html>");
    }

    public void destroy() {
    }
}
What am I missing?
Since August 31, 2021 (this commit) the Docker image tomcat:latest uses Tomcat 10 (see the list of available tags).
As you are probably aware, software which uses the javax.* namespace does not work on Jakarta EE 9 servers such as Tomcat 10 (see e.g. this question). Therefore:
if it is a new project, migrate to the jakarta.* namespace (as sketched below) and test everything on Tomcat 10 or higher;
if it is a legacy project, use another Docker image, e.g. the tomcat:9 tag.
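For the new-project route, the migration for a servlet like this one is mostly an import swap; a minimal sketch (the class body stays unchanged):
// Jakarta EE 9+ (Tomcat 10) equivalents of the javax.* imports used above
import java.io.*;
import jakarta.servlet.http.*;
import jakarta.servlet.annotation.*;

@WebServlet(name = "helloServlet", value = "/hello-servlet")
public class HelloServlet extends HttpServlet {
    // init(), doGet() and destroy() exactly as in the original servlet
}
For the legacy route, only the Dockerfile's first line changes: FROM tomcat:9.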

Is there a way to configure a docker container (testcontainer) with the mapped port before it starts?

I have a Testcontainers setup that creates an Ory Hydra (oryd/hydra) container in a JUnit 4 test.
@Bean
public GenericContainer hydra() {
    WaitStrategy waitStrategy = Wait.forHttp("/health/ready").forStatusCode(200);
    GenericContainer hydra =
        new GenericContainer("oryd/hydra:1.4.8")
            .withCommand("serve all --dangerous-force-http")
            .withEnv("URLS_SELF_ISSUER", "http://127.0.0.1:4444/")
            .withEnv("DSN", "memory")
            .withEnv("SECRETS_SYSTEM", "youReallyNeedToChangeThis")
            .withEnv("OIDC_SUBJECT_IDENTIFIERS_SUPPORTED_TYPES", "public,pairwise")
            .withEnv("OIDC_SUBJECT_IDENTIFIERS_PAIRWISE_SALT", "youReallyNeedToChangeThis")
            .withEnv("STRATEGIES_ACCESS_TOKEN", "jwt")
            .withEnv("OIDC_SUBJECT_IDENTIFIERS_SUPPORTED_TYPES", "public")
            .withEnv("URLS_CONSENT", "http://127.0.0.1:3000/consent")
            .withEnv("URLS_LOGIN", "http://127.0.0.1:3000/login")
            .withExposedPorts(4444, 4445)
            .waitingFor(waitStrategy)
            .withNetwork(network)       // network and consumer are defined elsewhere in the test class
            .withLogConsumer(consumer);
    hydra.start();
    return hydra;
}
The problem is with the environment variable "URLS_SELF_ISSUER". The clients of the Hydra server validate that the URL of the server matches the value of "URLS_SELF_ISSUER", so its value should match the URL exposed to clients. However, Testcontainers binds exposed port 4444 to a random host port, so the URL will almost always differ from 127.0.0.1:4444.
This is a chicken-and-egg problem: I don't know what the port is until after the container starts, and then it's too late to update the variable.
Is there a way to know the mapped port in advance so I can configure the container variable "URLS_SELF_ISSUER" with the right URL?

gRPC streaming procedure returning "Method is unimplemented." when running in Azure Container Instance

I have a gRPC service defined and implemented in .NET Core 3.1 using C#. I have a streaming call defined like so:
service MyService {
    rpc MyStreamingProcedure(Point) returns (stream ResponseValue);
}
In the generated service base class it looks like this:
public virtual global::System.Threading.Tasks.Task MyStreamingProcedure(global::MyService.gRPC.Point request, grpc::IServerStreamWriter<global::MyService.gRPC.ResponseValue> responseStream, grpc::ServerCallContext context)
{
    throw new grpc::RpcException(new grpc::Status(grpc::StatusCode.Unimplemented, ""));
}
In my service it is implemented by overriding this:
public override async Task MyStreamingProcedure(Point request, IServerStreamWriter<ResponseValue> responseStream, ServerCallContext context)
{
    /* magic here */
}
I have this building in a docker container, and when I run it on localhost it runs perfectly:
docker run -it -p 8001:8001 mycontainerregistry.azurecr.io/myservice.grpc:latest
Now here is the question. When I run this in an Azure Container Instance and the client calls it using a public IP address, the call fails with:
Unhandled exception. Grpc.Core.RpcException: Status(StatusCode=Unimplemented, Detail="Method is unimplemented.")
at Grpc.Net.Client.Internal.HttpContentClientStreamReader`2.MoveNextCore(CancellationToken cancellationToken)
It appears that it is not seeing the override and is running the procedure in the base class. The unary call on the same gRPC service works fine with the container running in public ACI. Why would the streaming call behave differently on localhost than over a public IP address?
I got the same error because I had not registered the service:
app.UseEndpoints(endpoints =>
{
    endpoints.MapGrpcService<MyService>();
});

scdf 1.7.3 docker k8s @Bean not running, no logs

As a user writing a processor as a cloud function (SCDF 1.7.3, Spring Boot 1.5.9, spring-cloud-function-dependencies 1.0.2):
import java.util.function.Function;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

// Assumes a log field is provided, e.g. by Lombok's @Slf4j.
@SpringBootApplication
public class MyFunctionBootApp {

    public static void main(String[] args) {
        SpringApplication.run(MyFunctionBootApp.class,
            "--spring.cloud.stream.function.definition=toUpperCase");
    }

    @Bean
    public Function<String, String> toUpperCase() {
        return s -> {
            log.info("received:=" + s);
            return ((s + "jsa").toUpperCase());
        };
    }
}
I've created a simple stream: time | function-runner | log.
function-runner-0.0.6.jar at Nexus is OK.
The Docker image is created OK:
Container entrypoint set to [java, -cp, /app/resources:/app/classes:/app/libs/*, function.runner.MyFunctionBootApp]
No time message from the time pod arrives at the function-runner processor executing the toUpperCase function.
No logs.
I am checking the deployment using app.function-runner.spring.cloud.stream.function.definition=toUpperCase and @FunctionalScan.
Any clues?
We discussed function-runner being deprecated in favor of native support for Spring Cloud Function in Spring Cloud Stream. See: scdf-1-7-3-docker-k8s-function-runner-not-start. Please don't post it as a duplicate.
Also, you're on a very old Spring Boot version (v1.5.9, at least 1.5 years old). More importantly, Spring Boot 1.x is in maintenance-only mode and will be EOL by August 2019. See: spring-boot-1-x-eol-aug-1st-2019. It'd be good to upgrade to the latest 2.1.x.
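For reference, with the native Spring Cloud Function support mentioned above (Spring Cloud Stream 2.1+ on Boot 2.1.x), the same processor no longer needs function-runner. A hedged sketch; the class name is illustrative, and on Spring Cloud Stream 2.1 you may additionally need @EnableBinding(Processor.class), while 3.x binds the function automatically:
import java.util.function.Function;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

// With spring.cloud.stream.function.definition=toUpperCase set in
// application.properties, this function bean is bound to the
// processor's input/output channels.
@SpringBootApplication
public class UppercaseProcessorApp {

    public static void main(String[] args) {
        SpringApplication.run(UppercaseProcessorApp.class, args);
    }

    @Bean
    public Function<String, String> toUpperCase() {
        return s -> (s + "jsa").toUpperCase();
    }
}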

Clustered Vert.x not working in docker if vert.x instance is created manually

I set up a small project for learning the capabilities of Vert.x in a cluster environment, but I'm facing some weird issues when I try to create the vertx instance inside a Docker image.
The project consists of just 2 verticles deployed in different Docker containers, which use the event bus to communicate with each other.
If I use the Vert.x-provided launcher:
Launcher.executeCommand("run", verticleClass, "--cluster")
(or just stating that the main class is io.vertx.core.Launcher and putting the right arguments)
Everything works both locally and inside Docker images. But if I try to create the vertx instance manually with
Vertx.rxClusteredVertx(VertxOptions())
    .flatMap { it.rxDeployVerticle(verticleClass) }
    .subscribe()
then it's not working in Docker (it works locally). Or, more visually:
|                 | Local | Docker |
|:---------------:|:-----:|:------:|
| Vertx launcher  |   Y   |   Y    |
| Custom launcher |   Y   |   N    |
By checking the Docker logs it seems that everything works. I can see that both verticles know each other:
Members [2] {
Member [172.18.0.2]:5701 - c5e9636d-b3cd-4e24-a8ce-e881218bf3ce
Member [172.18.0.3]:5701 - a09ce83d-e0b3-48eb-aad7-fbd818c389bc this
}
But when I try to send a message through the event bus the following exception is thrown:
WARNING: Connecting to server localhost:33845 failed
io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: localhost/127.0.0.1:33845
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:325)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:340)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:633)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused
... 11 more
Just to simplify things, I uploaded the project to GitHub. I tried to make it as simple as possible, so it has 2 verticles & 2 main classes, plus scripts for every combination.
"By checking the Docker logs it seems that everything works. I can see that both verticles know each other"
Yes, because your cluster manager works fine, but you should also make your event bus configuration consistent on every node (machine/Docker container) in your cluster because, as mentioned in the Vert.x documentation, cluster managers do not handle the event bus inter-node transport; this is done directly by Vert.x with TCP connections.
You have to set the cluster host on each node to the IP address of that node itself, plus an arbitrary port number (unless you run more than one Vert.x instance on the same node, in which case you have to choose a different port number for each Vert.x instance). The error you are facing occurs because the default cluster host is localhost.
For example, if a node's IP address is 192.168.1.12, then you would do the following:
VertxOptions options = new VertxOptions()
    .setClustered(true)
    .setClusterHost("192.168.1.12") // this node's IP
    .setClusterPort(17001)          // any arbitrary port, but make sure no other Vert.x instance uses the same port on the same node
    .setClusterManager(clusterManager);
On another node, whose IP address is 192.168.1.56, you would do the following:
VertxOptions options = new VertxOptions()
    .setClustered(true)
    .setClusterHost("192.168.1.56") // the other node's IP
    .setClusterPort(17001)          // OK, because this is a different node
    .setClusterManager(clusterManager);
By researching a bit more into the Vert.x Launcher code, I found out that internally it does more than just "parse" the inputs.
When vertx is configured to run in cluster mode, the Launcher sets the cluster address itself (via the BareCommand class), so in order to replicate the Launcher behaviour with your own main class (and have the flexibility to configure your vertx instance via code instead of via args), the code is as follows:
// Imports needed by getDefaultAddress(); Vert.x/Rx imports omitted, as in the original.
import java.net.Inet6Address
import java.net.NetworkInterface
import java.net.SocketException
import java.util.Enumeration

fun main(args: Array<String>) {
    Vertx.rxClusteredVertx(
        VertxOptions(
            clusterHost = getDefaultAddress()
        )
    )
        .flatMap { it.rxDeployVerticle(verticleClass) }
        .subscribe()
}

// As taken from io.vertx.core.impl.launcher.commands.BareCommand
fun getDefaultAddress(): String? {
    val nets: Enumeration<NetworkInterface>
    try {
        nets = NetworkInterface.getNetworkInterfaces()
    } catch (e: SocketException) {
        return null
    }
    var netinf: NetworkInterface
    while (nets.hasMoreElements()) {
        netinf = nets.nextElement()
        val addresses = netinf.inetAddresses
        while (addresses.hasMoreElements()) {
            val address = addresses.nextElement()
            // Return the first non-loopback, non-multicast IPv4 address.
            if (!address.isAnyLocalAddress && !address.isMulticastAddress
                && address !is Inet6Address) {
                return address.hostAddress
            }
        }
    }
    return null
}
