How to properly run a Spock test using Testcontainers in Spring Boot

I have a Spring Boot app, with tests written using Spock and Testcontainers (MySQL). What I've made is working fine, but it doesn't feel right (e.g. because @Sql runs for each test iteration, so I have to use INSERT IGNORE ... in my SQL script; I am also not happy about the trick with the static and non-static MySQLContainer).
I am a total beginner when it comes to Testcontainers (and Spock, actually), so if someone could tell me how to make it better using Spock, @Sql, the datasource and Testcontainers, I would be grateful.
@SpringBootTest
@ContextConfiguration(initializers = Initializer.class)
@Testcontainers
class GeneratorTest extends Specification {

    public static MySQLContainer staticMySQLContainer = new MySQLContainer()
            .withDatabaseName("test")
            .withUsername("test")
            .withPassword("test")

    @Shared
    public MySQLContainer mySQLContainer = staticMySQLContainer

    static class Initializer implements ApplicationContextInitializer<ConfigurableApplicationContext> {
        @Override
        void initialize(@NotNull ConfigurableApplicationContext configurableApplicationContext) {
            TestPropertyValues values = TestPropertyValues.of(
                    "spring.datasource.url=" + staticMySQLContainer.getJdbcUrl(),
                    "spring.datasource.password=" + staticMySQLContainer.getPassword(),
                    "spring.datasource.username=" + staticMySQLContainer.getUsername()
            )
            values.applyTo(configurableApplicationContext)
        }
    }

    @Autowired
    private CarService carService

    @Autowired
    private BikeRepository bikeRepository

    @Sql("/testdata/insert_into_cars.sql")
    def "validate number of doors"(int carId, int expectedNrOfDoors) {
        given:
        Car car = carService.getById(carId)

        expect:
        car.getNrOfDoors() == expectedNrOfDoors

        where:
        carId || expectedNrOfDoors
        1     || 3
        2     || 3
        3     || 5
    }
}
Updated (use of JDBC-based containers):
When it comes to JDBC-based containers, I'm not sure if I set it up correctly. I've created application-test.properties in the test/resources directory and put this in it:
spring.datasource.url=jdbc:tc:mysql:8.0.12://localhost:3306/shop?createDatabaseIfNotExist=true&serverTimezone=UTC
spring.datasource.username=test
spring.datasource.password=test
spring.datasource.driver-class-name=org.testcontainers.jdbc.ContainerDatabaseDriver
and my test class looks like:
@SpringBootTest
@Testcontainers
@TestPropertySource(locations = "classpath:application-test.properties")
class GeneratorTest extends Specification {

    @Autowired
    private CarService carService

    <skipped for brevity>
When I try to run the test class I keep getting:
2018-11-20 19:10:25.409 2612@DESKTOP-MLK30PF INFO --- [main] 🐳 [mysql:8.0.12].waitUntilContainerStarted : Waiting for database connection to become available at jdbc:mysql://192.168.99.100:32770/shop using query 'SELECT 1' (JdbcDatabaseContainer.java:128)
2018-11-20 19:12:25.420 2612@DESKTOP-MLK30PF ERROR --- [main] 🐳 [mysql:8.0.12].tryStart : Could not start container (GenericContainer.java:297)
org.rnorth.ducttape.TimeoutException: org.rnorth.ducttape.TimeoutException: java.util.concurrent.TimeoutException
at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:53)
at org.testcontainers.containers.JdbcDatabaseContainer.waitUntilContainerStarted(JdbcDatabaseContainer.java:129)
at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:292)
followed by
2018-11-20 19:12:25.486 2612@DESKTOP-MLK30PF INFO --- [tc-okhttp-stream-242833949] 🐳 [mysql:8.0.12].accept : STDERR: 2018-11-20T18:10:44.918132Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.12' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server - GPL. (Slf4jLogConsumer.java:32)
2018-11-20 19:12:25.487 2612@DESKTOP-MLK30PF INFO --- [main] 🐳 [mysql:8.0.12].tryStart : Creating container for image: mysql:8.0.12 (GenericContainer.java:253)
2018-11-20 19:12:25.609 2612@DESKTOP-MLK30PF INFO --- [main] 🐳 [mysql:8.0.12].tryStart : Starting container with ID: de75c9dafed8032b84cb827bf43a29c1964bfe4e168422272c9310a4803fd856 (GenericContainer.java:266)
2018-11-20 19:12:25.896 2612@DESKTOP-MLK30PF INFO --- [main] 🐳 [mysql:8.0.12].tryStart : Container mysql:8.0.12 is starting: de75c9dafed8032b84cb827bf43a29c1964bfe4e168422272c9310a4803fd856 (GenericContainer.java:273)
2018-11-20 19:12:25.901 2612@DESKTOP-MLK30PF INFO --- [main] 🐳 [mysql:8.0.12].waitUntilContainerStarted : Waiting for database connection to become available at jdbc:mysql://192.168.99.100:32772/shop using query 'SELECT 1' (JdbcDatabaseContainer.java:128)
For issues with MySQL version 8, look here.
UPDATE (solved):
As @bsideup pointed out, JDBC-based containers are the best solution for this use case. It works perfectly fine as I described above, but I had to change the MySQL version from 8 to a lower one.
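For example, pinning the image in the JDBC URL to an older version (5.7 here is only an illustration of "a lower version"; the rest of application-test.properties stays the same):

spring.datasource.url=jdbc:tc:mysql:5.7://localhost:3306/shop?createDatabaseIfNotExist=true&serverTimezone=UTC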

Have you tried JDBC-based containers?

You can do:

def setupSpec() { // or use setup() { if not a @Shared container
    System.getProperties()
            .putAll([
                    "spring.datasource.url"     : mySQLContainer...,
                    "spring.datasource.username": mySQLContainer...,
                    "spring.datasource.password": mySQLContainer...
            ])
}
Then you can get rid of the staticMySQLContainer and also the Initializer.
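Putting that suggestion together, here is a minimal sketch of how the whole spec could look. It reuses the getters from the original Initializer and assumes the testcontainers-spock extension manages the @Shared container and that the system properties are set before the Spring context is created, so treat it as a starting point rather than a verified setup:

import org.springframework.beans.factory.annotation.Autowired
import org.springframework.boot.test.context.SpringBootTest
import org.springframework.test.context.jdbc.Sql
import org.testcontainers.containers.MySQLContainer
import org.testcontainers.spock.Testcontainers
import spock.lang.Shared
import spock.lang.Specification

@SpringBootTest
@Testcontainers
class GeneratorTest extends Specification {

    // One shared container for the whole spec; the testcontainers-spock
    // extension is assumed to start it before any feature runs
    @Shared
    MySQLContainer mySQLContainer = new MySQLContainer()
            .withDatabaseName("test")
            .withUsername("test")
            .withPassword("test")

    @Autowired
    private CarService carService

    def setupSpec() {
        // Expose the container's connection details to Spring before the
        // application context is created (system properties take precedence
        // over application.properties)
        System.getProperties().putAll([
                "spring.datasource.url"     : mySQLContainer.getJdbcUrl(),
                "spring.datasource.username": mySQLContainer.getUsername(),
                "spring.datasource.password": mySQLContainer.getPassword()
        ])
    }

    @Sql("/testdata/insert_into_cars.sql")
    def "validate number of doors"(int carId, int expectedNrOfDoors) {
        expect:
        carService.getById(carId).getNrOfDoors() == expectedNrOfDoors

        where:
        carId || expectedNrOfDoors
        1     || 3
        2     || 3
        3     || 5
    }
}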

Related

Container startup failing for testcontainers-scala

I was testing out a simple test, shown below, using the library-provided MySQL test container, when the container startup failed.
class Test extends FlatSpec with ForAllTestContainer {
  override val container = MySQLContainer()

  it should "temp" in {
    assert(1 == 1)
  }
}
The stack trace for the error is as shown below
Exception encountered when invoking run on a nested suite - Container startup failed
org.testcontainers.containers.ContainerLaunchException: Container startup failed
at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:322)
at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:302)
at com.dimafeng.testcontainers.SingleContainer.start(Container.scala:46)
at com.dimafeng.testcontainers.ForAllTestContainer.run(ForAllTestContainer.scala:17)
at com.dimafeng.testcontainers.ForAllTestContainer.run$(ForAllTestContainer.scala:13)
Caused by: org.testcontainers.containers.ContainerFetchException: Can't get Docker image: RemoteDockerImage(imageNameFuture=java.util.concurrent.CompletableFuture@18920cc[Completed normally], imagePullPolicy=DefaultPullPolicy(), dockerClient=LazyDockerClient.INSTANCE)
at org.testcontainers.containers.GenericContainer.getDockerImageName(GenericContainer.java:1265)
at org.testcontainers.containers.GenericContainer.logger(GenericContainer.java:600)
at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:311)
... 18 more
Caused by: java.util.NoSuchElementException: No value present
at java.util.Optional.get(Optional.java:135)
at org.testcontainers.utility.ResourceReaper.start(ResourceReaper.java:103)
at org.testcontainers.DockerClientFactory.client(DockerClientFactory.java:155)
at org.testcontainers.LazyDockerClient.getDockerClient(LazyDockerClient.java:14)
at org.testcontainers.LazyDockerClient.listImagesCmd(LazyDockerClient.java:12)
at org.testcontainers.images.LocalImagesCache.maybeInitCache(LocalImagesCache.java:68)
at org.testcontainers.images.LocalImagesCache.get(LocalImagesCache.java:32)
at org.testcontainers.images.AbstractImagePullPolicy.shouldPull(AbstractImagePullPolicy.java:18)
at org.testcontainers.images.RemoteDockerImage.resolve(RemoteDockerImage.java:62)
at org.testcontainers.images.RemoteDockerImage.resolve(RemoteDockerImage.java:25)
at org.testcontainers.utility.LazyFuture.getResolvedValue(LazyFuture.java:20)
at org.testcontainers.utility.LazyFuture.get(LazyFuture.java:27)
at org.testcontainers.containers.GenericContainer.getDockerImageName(GenericContainer.java:1263)
... 20 more

.war on tomcat on docker, 404 on servlet

I finally found the motivation to work with Docker: I tried to deploy a basic "hello-world" servlet on a Tomcat running in a Docker container.
This servlet works perfectly when I run it on the Tomcat started by IntelliJ.
But when I use it with Docker, using this Dockerfile
FROM tomcat:latest
ADD example.war /usr/local/tomcat/webapps/
EXPOSE 8080
CMD ["/usr/local/tomcat/bin/catalina.sh", "run"]
And I build/start the image/container:
docker build -t example .
docker run -p 8090:8080 example
The index.jsp is displayed correctly at localhost:8090/example/, but I get a 404 when trying to access the servlet at localhost:8090/example/hello-servlet
At the same time, I can access localhost:8080/example/hello-servlet when my non-dockerized Tomcat runs, and it works well.
Here is the servlet code:
package io.bananahammock.bananahammock_backend;

import java.io.*;
import javax.servlet.http.*;
import javax.servlet.annotation.*;

@WebServlet(name = "helloServlet", value = "/hello-servlet")
public class HelloServlet extends HttpServlet {
    private String message;

    public void init() {
        message = "Hello World!";
    }

    public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<html><body>");
        out.println("<h1>" + message + "</h1>");
        out.println("</body></html>");
    }

    public void destroy() {
    }
}
What am I missing?
Since August 31, 2021 (this commit) the Docker image tomcat:latest uses Tomcat 10 (see the list of available tags).
As you are probably aware, software which uses the javax.* namespace does not work on Jakarta EE 9 servers such as Tomcat 10 (see e.g. this question). Therefore:
if it is a new project, migrate to the jakarta.* namespace and test everything on Tomcat 10 or higher;
if it is a legacy project, use another Docker image, e.g. the tomcat:9 tag (see the Dockerfile sketch below).
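For the legacy route, the only change to the Dockerfile shown above would be pinning the base image, for example:

FROM tomcat:9
ADD example.war /usr/local/tomcat/webapps/
EXPOSE 8080
CMD ["/usr/local/tomcat/bin/catalina.sh", "run"]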

Spring Boot Admin with Eureka and Docker Swarm

EDIT/SOLUTION:
I've got it, partly thanks to @anemyte's comment. Although the eureka.hostname property was not the issue at play (though it did warrant correction), looking more closely got me to the true cause of the problem: the network interface in use, port forwarding and (bad) luck.
The services that I chose for this prototypical implementation were those that have port-forwardings in a production setting (I must have unfortunately forgotten to add a port-forwarding to the example service below - dumb, though I do not know if this would have helped).
When a Docker Swarm service has a port forwarding, the container has an additional bridge interface in addition to the overlay interface which is used for internal container-to-container communication.
Unfortunately, the client services were choosing to register with Eureka with their bridge interface IP as the advertised IP instead of the internal swarm IP - possibly because that is what InetAddress.getLocalHost() (which is used internally by Spring Cloud) would return in that case.
This led me to erroneously believe that Spring Boot Admin could reach these services - as I externally could, when it in fact could not as the wrong IP was being advertised. Using cURL to verify this only compounded the confusion as I was using the overlay IP to check whether the services can communicate, which is not the one that was being registered with Eureka.
The (provisional) solution to the issue was setting the spring.cloud.inetutils.preferred-networks setting to 10.0, which is the default IP address pool (more specifically: 10.0.0.0/8) for the internal swarm networks (documentation here). There is also a blacklist approach using spring.cloud.inetutils.ignored-networks, however I prefer not to use it.
In this case, client applications advertised their actual swarm overlay IP to Eureka, and SBA was able to reach them.
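For reference, this is roughly what that client-side setting looks like in YAML (the property is the spring.cloud.inetutils.preferred-networks one mentioned above; 10.0 matches the swarm address pool described there):

spring:
  cloud:
    inetutils:
      preferred-networks:
        - 10.0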
I do find it a bit odd that I did not get any error messages from SBA, and will be opening an issue on their tracker. Perhaps I was simply doing something wrong.
(original question follows)
I have the following setup:
Service discovery using Eureka (with eureka.client.fetch-registry=true and eureka.instance.preferIpAddress=true)
Spring Boot Admin running in the same application as Eureka, with spring.boot.admin.context-path=/admin
Keycloak integration, such that:
SBA itself uses a service account to poll the various /actuator endpoints of my client applications.
The SBA UI itself is protected via a login page which expects an administrative login.
Locally, this setup works. When I start my eureka-server application together with client applications, I see the following correct behaviour:
Eureka running on e.g. localhost:8761
Client applications successfully registering with Eureka via IP registration (eureka.instance.preferIpAddress=true)
SBA running at e.g. localhost:8761/admin and discovering my services
localhost:8761/admin correctly redirects to my Keycloak login page, and login correctly provides a session for the SBA UI
SBA itself successfully polling the /actuator endpoints of any registered applications.
However, I have issues replicating this setup inside a Docker Swarm.
I have two Docker Services, let's say eureka-server and client-api - both are created using the same network and the containers can reach each other via this network (via e.g. curl). eureka-server correctly starts and client-api registers with Eureka right away.
Attempting to navigate to eureka_url/admin correctly shows the Keycloak login page and redirects back to the Spring Boot Admin UI after a successful login. However, no applications are registered and I cannot figure out why.
I've attempted to enable more debug/trace logging, but I see absolutely no logs; it's as if SBA is simply not fetching the Eureka registry.
Does anyone know of a way to troubleshoot this behaviour? Has anyone had this issue?
EDIT:
I'm not quite sure which settings may be pertinent to the issue, but here are some of my configuration files (as code snippets since they're not that small, I hope that's OK):
application.yaml
(Includes base eureka properties, SBA properties, and Keycloak properties for SBA)
---
eureka:
  hostname: localhost
  port: 8761
  client:
    register-with-eureka: false
    # Registry must be fetched so that Spring Boot Admin knows that there are registered applications
    fetch-registry: true
    serviceUrl:
      defaultZone: http://${eureka.hostname}:${eureka.port}/eureka/
  instance:
    lease-renewal-interval-in-seconds: 10
    lease-expiration-duration-in-seconds: 30
  environment: eureka-test-${user.name}
  server:
    enable-self-preservation: false # Intentionally disabled for non-production

spring:
  application:
    name: eureka-server
  boot:
    admin:
      client:
        prefer-ip: true
      # Since we are running in Eureka, "/" is already the root path for Eureka itself
      # Register SBA under the "/admin" path
      context-path: /admin
  cloud:
    config:
      enabled: false
  main:
    allow-bean-definition-overriding: true

keycloak:
  realm: ${realm}
  auth-server-url: ${auth_url}
  # Client ID
  resource: spring-boot-admin-automated
  # Client secret used for service account grant
  credentials:
    secret: ${client_secret}
  ssl-required: external
  autodetect-bearer-only: true
  use-resource-role-mappings: false
  token-minimum-time-to-live: 90
  principal-attribute: preferred_username
build.gradle
// Versioning / Spring parent poms
apply from: new File(project(':buildscripts').projectDir, '/dm-versions.gradle')
configurations {
all*.exclude module: 'spring-boot-starter-tomcat'
}
ext {
springBootAdminVersion = '2.3.1'
keycloakVersion = '11.0.2'
}
dependencies {
compileOnly 'org.projectlombok:lombok'
implementation 'org.springframework.cloud:spring-cloud-starter-netflix-eureka-server'
implementation "de.codecentric:spring-boot-admin-starter-server:${springBootAdminVersion}"
implementation 'org.keycloak:keycloak-spring-boot-starter'
implementation 'org.springframework.boot:spring-boot-starter-security'
compile "org.keycloak:keycloak-admin-client:${keycloakVersion}"
testCompileOnly 'org.projectlombok:lombok'
}
dependencyManagement {
imports {
mavenBom "org.keycloak.bom:keycloak-adapter-bom:${keycloakVersion}"
}
}
The actual application code:
package com.app.eureka;

import de.codecentric.boot.admin.server.config.EnableAdminServer;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@EnableAdminServer
@EnableEurekaServer
@SpringBootApplication
public class EurekaServer {
    public static void main(String[] args) {
        SpringApplication.run(EurekaServer.class, args);
    }
}
Keycloak configuration:
package com.app.eureka.keycloak.config;
import de.codecentric.boot.admin.server.web.client.HttpHeadersProvider;
import org.keycloak.KeycloakPrincipal;
import org.keycloak.KeycloakSecurityContext;
import org.keycloak.OAuth2Constants;
import org.keycloak.adapters.springboot.KeycloakSpringBootProperties;
import org.keycloak.adapters.springsecurity.KeycloakConfiguration;
import org.keycloak.adapters.springsecurity.authentication.KeycloakAuthenticationProvider;
import org.keycloak.adapters.springsecurity.config.KeycloakWebSecurityConfigurerAdapter;
import org.keycloak.adapters.springsecurity.token.KeycloakAuthenticationToken;
import org.keycloak.admin.client.Keycloak;
import org.keycloak.admin.client.KeycloakBuilder;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Scope;
import org.springframework.context.annotation.ScopedProxyMode;
import org.springframework.http.HttpHeaders;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.core.authority.mapping.SimpleAuthorityMapper;
import org.springframework.security.core.session.SessionRegistry;
import org.springframework.security.core.session.SessionRegistryImpl;
import org.springframework.security.web.authentication.session.RegisterSessionAuthenticationStrategy;
import org.springframework.security.web.authentication.session.SessionAuthenticationStrategy;
import org.springframework.web.context.WebApplicationContext;
import org.springframework.web.context.request.RequestContextHolder;
import org.springframework.web.context.request.ServletRequestAttributes;
import java.security.Principal;
import java.util.Objects;
@KeycloakConfiguration
@EnableConfigurationProperties(KeycloakSpringBootProperties.class)
class KeycloakConfig extends KeycloakWebSecurityConfigurerAdapter {
private static final String X_API_KEY = System.getProperty("sba_api_key");
@Value("${keycloak.token-minimum-time-to-live:60}")
private int tokenMinimumTimeToLive;
/**
* {@link HttpHeadersProvider} used to populate the {@link HttpHeaders} for
* accessing the state of the discovered clients.
*
* @param keycloak
* @return
*/
@Bean
public HttpHeadersProvider keycloakBearerAuthHeaderProvider(final Keycloak keycloak) {
return provider -> {
String accessToken = keycloak.tokenManager().getAccessTokenString();
HttpHeaders headers = new HttpHeaders();
headers.add("X-Api-Key", X_API_KEY);
headers.add("X-Authorization-Token", "keycloak-bearer " + accessToken);
return headers;
};
}
/**
* The Keycloak Admin client that provides the service-account Access-Token
*
* @param props
* @return keycloakClient the prepared admin client
*/
@Bean
public Keycloak keycloak(KeycloakSpringBootProperties props) {
final String secretString = "secret";
Keycloak keycloakAdminClient = KeycloakBuilder.builder()
.serverUrl(props.getAuthServerUrl())
.realm(props.getRealm())
.grantType(OAuth2Constants.CLIENT_CREDENTIALS)
.clientId(props.getResource())
.clientSecret((String) props.getCredentials().get(secretString))
.build();
keycloakAdminClient.tokenManager().setMinTokenValidity(tokenMinimumTimeToLive);
return keycloakAdminClient;
}
/**
* Put the SBA UI behind a Keycloak-secured login page.
*
* @param http
*/
@Override
protected void configure(HttpSecurity http) throws Exception {
super.configure(http);
http
.csrf().disable()
.authorizeRequests()
.antMatchers("/**/*.css", "/admin/img/**", "/admin/third-party/**").permitAll()
.antMatchers("/admin/**").hasRole("ADMIN")
.anyRequest().permitAll();
}
@Autowired
public void configureGlobal(final AuthenticationManagerBuilder auth) {
SimpleAuthorityMapper grantedAuthorityMapper = new SimpleAuthorityMapper();
grantedAuthorityMapper.setPrefix("ROLE_");
grantedAuthorityMapper.setConvertToUpperCase(true);
KeycloakAuthenticationProvider keycloakAuthenticationProvider = keycloakAuthenticationProvider();
keycloakAuthenticationProvider.setGrantedAuthoritiesMapper(grantedAuthorityMapper);
auth.authenticationProvider(keycloakAuthenticationProvider);
}
@Bean
@Override
protected SessionAuthenticationStrategy sessionAuthenticationStrategy() {
return new RegisterSessionAuthenticationStrategy(buildSessionRegistry());
}
@Bean
protected SessionRegistry buildSessionRegistry() {
return new SessionRegistryImpl();
}
/**
* Allows injecting a request-scoped wrapper for {@link KeycloakSecurityContext}.
* <p>
* Returns the {@link KeycloakSecurityContext} from the Spring
* {@link ServletRequestAttributes}'s {@link Principal}.
* <p>
* The principal must support retrieval of the KeycloakSecurityContext, so at
* this point, only {@link KeycloakPrincipal} values and
* {@link KeycloakAuthenticationToken} are supported.
*
* @return the current <code>KeycloakSecurityContext</code>
*/
@Bean
@Scope(scopeName = WebApplicationContext.SCOPE_REQUEST, proxyMode = ScopedProxyMode.TARGET_CLASS)
public KeycloakSecurityContext provideKeycloakSecurityContext() {
ServletRequestAttributes attributes = (ServletRequestAttributes) RequestContextHolder.getRequestAttributes();
Principal principal = Objects.requireNonNull(attributes).getRequest().getUserPrincipal();
if (principal == null) {
return null;
}
if (principal instanceof KeycloakAuthenticationToken) {
principal = (Principal) ((KeycloakAuthenticationToken) principal).getPrincipal();
}
if (principal instanceof KeycloakPrincipal<?>) {
return ((KeycloakPrincipal<?>) principal).getKeycloakSecurityContext();
}
return null;
}
}
KeycloakConfigurationResolver
(separate class to prevent circular bean dependency that happens for some reason)
package com.app.eureka.keycloak.config;
import org.keycloak.adapters.springboot.KeycloakSpringBootConfigResolver;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
public class KeycloakConfigurationResolver {
/**
* Load Keycloak configuration from application.properties or application.yml
*
* @return
*/
@Bean
public KeycloakSpringBootConfigResolver keycloakConfigResolver() {
return new KeycloakSpringBootConfigResolver();
}
}
Logout controller
package com.app.eureka.keycloak.config;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.PostMapping;
import javax.servlet.http.HttpServletRequest;
@Controller
class LogoutController {
/**
* Logs the current user out, preventing access to the SBA UI
* @param request
* @return
* @throws Exception
*/
@PostMapping("/admin/logout")
public String logout(final HttpServletRequest request) throws Exception {
request.logout();
return "redirect:/admin";
}
}
I unfortunately do not have a docker-compose.yaml as our deployment is done mostly through Ansible, and anonymizing those scripts is rather difficult.
The services are ultimately created as follows (using docker service create):
(some of these networks may not be relevant as this is a local swarm running on my personal node, of note are the swarm networks)
dev@ws:~$ docker network ls
NETWORK ID NAME DRIVER SCOPE
3ba4a65c319f bridge bridge local
21065811cbff docker_gwbridge bridge local
ti1ksbdxlouo services overlay swarm
c59778b105b5 host host local
379lzdi0ljp4 ingress overlay swarm
dd92d2f75a31 none null local
eureka-server Dockerfile:
FROM registry/image:latest
MAINTAINER "dev@com.app"
COPY eureka-server.jar /home/myuser/eureka-server.jar
USER myuser
WORKDIR /home/myuser
CMD /usr/bin/java -jar \
-Xmx523351K -Xss1M -XX:ReservedCodeCacheSize=240M \
-XX:MaxMetaspaceSize=115625K \
-Djava.security.egd=file:/dev/urandom eureka-server.jar \
--server.port=8761; sh
Eureka/SBA app Docker swarm service:
dev@ws:~$ docker service create --name eureka-server -p 8080:8761 --replicas 1 --network services --hostname eureka-server --limit-cpu 1 --limit-memory 768m eureka-server
Client applications are then started as follows:
Dockerfile
FROM registry/image:latest
MAINTAINER "dev@com.app"
COPY client-api.jar /home/myuser/client-api.jar
USER myuser
WORKDIR /home/myuser
CMD /usr/bin/java -jar \
-Xmx523351K -Xss1M -XX:ReservedCodeCacheSize=240M \
-XX:MaxMetaspaceSize=115625K \
-Djava.security.egd=file:/dev/urandom -Deureka.instance.hostname=client-api client-api.jar \
--eureka.zone=http://eureka-server:8761/eureka --server.port=0; sh
And then created as Swarm services as follows:
dev@ws:~$ docker service create --name client-api --replicas 1 --network services --hostname client-api --limit-cpu 1 --limit-memory 768m client-api
On the client side, of note are the following eureka.client settings:
eureka:
  name: ${spring.application.name}
  instance:
    leaseRenewalIntervalInSeconds: 10
    instanceId: ${spring.cloud.client.hostname}:${spring.application.name}:${spring.application.instanceId:${random.int}}
    preferIpAddress: true
  client:
    registryFetchIntervalSeconds: 5
That's all I can think of right now. The created docker services are running in the same network and can ping each other by IP as well as by hostname (cannot show output right now as I am not actively working on this at the moment, unfortunately).
In the Eureka UI I can, in fact, see my client applications registered and running - it's only SBA which does not seem to notice that there are any.
I found nothing wrong with the configuration you have presented. The only weak lead I see is eureka.hostname=localhost from application.yml. localhost and loopback IPs are two things that are better avoided with Swarm. I think you should check whether it isn't something network-related.
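If you want to experiment with that, a rough sketch of the change against the application.yaml above, using the eureka-server hostname given to the swarm service (as the edit at the top notes, this on its own did not fix the registration issue, but it avoids advertising localhost):

eureka:
  hostname: eureka-server
  port: 8761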

Trying to update a Postgres database in Docker results in the machine refusing the connection

While doing the first database update I get an error:
System.Net.Sockets.SocketException (10061): No connection could be made because the target machine actively refused it.
at Npgsql.NpgsqlConnector.Connect(NpgsqlTimeout timeout)
at Npgsql.NpgsqlConnector.RawOpen(NpgsqlTimeout timeout, Boolean async, CancellationToken cancellationToken)
at Npgsql.NpgsqlConnector.Open(NpgsqlTimeout timeout, Boolean async, CancellationToken cancellationToken)
at Npgsql.NpgsqlConnection.<>c__DisplayClass32_0.<g__OpenLong|0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at Npgsql.NpgsqlConnection.Open()
at Npgsql.EntityFrameworkCore.PostgreSQL.Storage.Internal.NpgsqlDatabaseCreator.Exists()
at Microsoft.EntityFrameworkCore.Migrations.HistoryRepository.Exists()
at Microsoft.EntityFrameworkCore.Migrations.Internal.Migrator.Migrate(String targetMigration)
at Microsoft.EntityFrameworkCore.Design.Internal.MigrationsOperations.UpdateDatabase(String targetMigration, String contextType)
at Microsoft.EntityFrameworkCore.Design.OperationExecutor.OperationBase.Execute(Action action)
No connection could be made because the target machine actively refused it.
I'm using Docker Toolbox (for a low-end PC), and I ran the postgres container with the command:
docker run --name pg -e POSTGRES_PASSWORD=password -p 5432:5432 -d postgres;
For connecting EF Core to PostgreSQL I used the Npgsql NuGet package and the connection string
"Server=127.0.0.1;Port=5432;Database=postgres;Username=postgres;Password=password"
Next, I created the MyDbContext file:

public class MyDbContext : DbContext
{
    public MyDbContext(DbContextOptions<MyDbContext> options) : base(options) { }
}

public class MyDbContextFactory : IDesignTimeDbContextFactory<MyDbContext>
{
    public MyDbContext CreateDbContext(string[] args)
    {
        var optionsBuilder = new DbContextOptionsBuilder<MyDbContext>();
        optionsBuilder.UseNpgsql("Server=127.0.0.1;Port=5432;Database=postgres;Username=postgres;Password=password");
        return new MyDbContext(optionsBuilder.Options);
    }
}
After which I added the DbContext in the Startup class:

public void ConfigureServices(IServiceCollection services, IConfiguration config)
{
    services.AddDbContext<Models.MyDbContext>();

Can anyone help me, or suggest what I could have missed?
For your scenario, Server=127.0.0.1 will work when you start your .NET Core project from the host.
I assume you start your .NET Core project from Docker; if so, you need to follow the steps below, which connect the two containers.
Change your connection string to use the postgres container name pg, like:
"Server=pg;Port=5432;Database=postgres;Username=postgres;Password=password"
Specify --link=pg while running the .NET Core container, like:
docker run -it -p 8585:80 --link=pg dockerpostgres
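As a side note (not from the answer above): --link is a legacy Docker feature, and a user-defined bridge network gives the same name-based connectivity. A rough sketch, where mynet is a hypothetical network name and dockerpostgres is the image from the answer:

docker network create mynet
docker run --name pg --network mynet -e POSTGRES_PASSWORD=password -p 5432:5432 -d postgres
docker run -it -p 8585:80 --network mynet dockerpostgres

The connection string stays "Server=pg;Port=5432;...", since the container name resolves on the shared network.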

Selenium Grid - How to shutdown the node after execution?

I am trying to implement a solution that would shut down the node running inside a Docker (Swarm) container after a test run.
I looked at the docker remove command, but cannot use docker container rm as the containers are at the service-task level.
I looked at the /lifecycle-manager API, but cannot get to the node from the client; the docker stack is running behind an nginx server and only one port (4444) gets exposed.
Finally, I looked at extending the grid node proxy (DefaultRemoteProxy). Excuse my bad Java code, this is my first stab at writing Java. With this, it looks like I can stop the node, but it gets re-registered with the hub.
How can I stop this re-registration process, or start the node without it?
My goal is to have a new container for every test and let the Docker orchestration bring up a new container when the node is shut down and the container gets removed (Docker API https://docs.docker.com/engine/api/v1.24/).
public class ExtendedProxy extends DefaultRemoteProxy implements TestSessionListener {

    public ExtendedProxy(RegistrationRequest request, GridRegistry registry) {
        super(request, registry);
    }

    @Override
    public void afterCommand(TestSession session, HttpServletRequest request, HttpServletResponse response) {
        RequestType type = SeleniumBasedRequest.createFromRequest(request, getRegistry()).extractRequestType();
        if (type == STOP_SESSION) {
            System.out.println("Going to Shutdown the Node");
            GridRegistry registry = getRegistry();
            registry.stop();
            registry.removeIfPresent(this);
        }
    }
}
Hub
[DefaultGridRegistry.assignRequestToProxy] - Shutting down registry.
[DefaultGridRegistry.removeIfPresent] - Cleaning up stale test sessions on the unregistered node
[DefaultGridRegistry.add] - Registered a node
Node
[ActiveSessions$1.onStop] - Removing session de04928d-7056-4b39-8137-27e9a0413024 (org.openqa.selenium.firefox.GeckoDriverService)
[SelfRegisteringRemote.registerToHub] - Registering the node to the hub: http://localhost:4444/grid/register
[SelfRegisteringRemote.registerToHub] - The node is registered to the hub and ready to use
I figured out the solution. I am answering my own question, hoping it'd benefit the community.
Start the node with the registerCycle command line flag set to 0; this stops the auto-registration thread from ever getting created:
-registerCycle 0
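On a standalone Selenium 3 grid node that would look roughly like this (the jar name and hub URL are just examples; the flag is the relevant part):

java -jar selenium-server-standalone-3.141.59.jar -role node -hub http://localhost:4444/grid/register -registerCycle 0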
And in your class that extends DefaultRemoteProxy, override the afterSession
@Override
public void afterSession(TestSession session) {
    totalSessionsCompleted++;
    GridRegistry gridRegistry = getRegistry();
    for (TestSlot slot : getTestSlots()) {
        gridRegistry.forceRelease(slot, SessionTerminationReason.PROXY_REREGISTRATION);
    }
    teardown();
    gridRegistry.removeIfPresent(this);
}
When the client executes the driver.quit() method, the node de-registers with the hub.
