STOMP broker with Redis as a full-featured message broker - spring-websocket

Is there a way to connect to Redis as a full-featured STOMP broker?
According to the Redis documentation, Redis can be used as a message broker, and we are planning to use it as the message broker for our chat product.
I am trying to connect to Redis, but it fails. Is there a way to connect a Redis message broker over STOMP?
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/chat");
    }

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        registry.setApplicationDestinationPrefixes("/app");
        // registry.enableSimpleBroker("/topic");
        registry.enableStompBrokerRelay("/topic")
                .setRelayHost("localhost")
                .setRelayPort(6379)
                .setClientLogin("guest")
                .setClientPasscode("guest");
    }
}
I got this exception when I tried:
io.netty.handler.codec.DecoderException: java.lang.IllegalArgumentException: No enum constant org.springframework.messaging.simp.stomp.StompCommand.-ERR unknown command CONNECT, with args beginning with:

You need a STOMP-compatible message broker, for example RabbitMQ with the STOMP plugin. Spring passes every STOMP frame directly to the broker; there is no way to translate STOMP commands into Redis Pub/Sub commands. The exception above is Redis replying `-ERR unknown command CONNECT` to the STOMP `CONNECT` frame.
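For reference, a relay configuration against RabbitMQ's STOMP plugin could look roughly like this (a sketch, not the asker's code; the host and the guest/guest credentials are the plugin's defaults and may differ in your setup):

```java
@Override
public void configureMessageBroker(MessageBrokerRegistry registry) {
    registry.setApplicationDestinationPrefixes("/app");
    // Relay STOMP frames to RabbitMQ's STOMP plugin, not to Redis
    registry.enableStompBrokerRelay("/topic")
            .setRelayHost("localhost")   // assumed broker host
            .setRelayPort(61613)         // STOMP plugin default port, not the Redis port 6379
            .setClientLogin("guest")
            .setClientPasscode("guest");
}
```

The key difference from the configuration in the question is the target: a broker that actually speaks STOMP, listening on its STOMP port.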

Spring Boot Admin with Eureka and Docker Swarm

EDIT/SOLUTION:
I've got it, partly thanks to @anemyte's comment. Although the eureka.hostname property was not the issue at play (though it did warrant correction), looking more closely led me to the true cause of the problem: the network interface in use, port forwarding, and (bad) luck.
The services that I chose for this prototypical implementation were those that have port forwardings in a production setting (unfortunately, I must have forgotten to add a port forwarding to the example service below - dumb, though I do not know whether adding one would have helped).
When a Docker Swarm service has a port forwarding, the container has an additional bridge interface in addition to the overlay interface which is used for internal container-to-container communication.
Unfortunately, the client services were registering with Eureka using their bridge interface IP as the advertised IP instead of the internal swarm IP - possibly because that is what InetAddress.getLocalHost() (which is used internally by Spring Cloud) returns in that case.
This led me to erroneously believe that Spring Boot Admin could reach these services - as I externally could, when it in fact could not as the wrong IP was being advertised. Using cURL to verify this only compounded the confusion as I was using the overlay IP to check whether the services can communicate, which is not the one that was being registered with Eureka.
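One way to see which IP a client actually advertised (a sketch; the `eureka-server` service name and port are assumptions about this setup) is to query the Eureka registry's REST endpoint from inside the overlay network rather than probing the containers directly:

```shell
# Query the Eureka registry and pull out the advertised instance IPs.
# 'eureka-server:8761' is the swarm service name/port assumed in this setup.
curl -s -H 'Accept: application/json' http://eureka-server:8761/eureka/apps \
  | grep -o '"ipAddr":"[^"]*"'
```

If the addresses printed here are bridge-network IPs rather than overlay IPs, SBA will be polling unreachable endpoints even though container-to-container pings succeed.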
The (provisional) solution to the issue was setting spring.cloud.inetutils.preferred-networks to 10.0, which matches the default address pool (more specifically: 10.0.0.0/8) for the internal swarm networks. There is also a blacklist approach using spring.cloud.inetutils.ignored-networks, but I prefer not to use it.
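In application.yaml form, the setting that fixed it looks like this (matching the 10.0.0.0/8 swarm pool described above):

```yaml
spring:
  cloud:
    inetutils:
      # Prefer interfaces whose address falls in the 10.0.0.0/8 swarm overlay pool
      preferred-networks:
        - 10.0
```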
In this case, client applications advertised their actual swarm overlay IP to Eureka, and SBA was able to reach them.
I do find it a bit odd that I did not get any error messages from SBA, and will be opening an issue on their tracker. Perhaps I was simply doing something wrong.
(original question follows)
I have the following setup:
Service discovery using Eureka (with eureka.client.fetch-registry=true and eureka.instance.preferIpAddress=true)
Spring Boot Admin running in the same application as Eureka, with spring.boot.admin.context-path=/admin
Keycloak integration, such that:
SBA itself uses a service account to poll the various /actuator endpoints of my client applications.
The SBA UI itself is protected via a login page which expects an administrative login.
Locally, this setup works. When I start my eureka-server application together with client applications, I see the following correct behaviour:
Eureka running on e.g. localhost:8761
Client applications successfully registering with Eureka via IP registration (eureka.instance.preferIpAddress=true)
SBA running at e.g. localhost:8761/admin and discovering my services
localhost:8761/admin correctly redirects to my Keycloak login page, and login correctly provides a session for the SBA UI
SBA itself successfully polling the /actuator endpoints of any registered applications.
However, I have issues replicating this setup inside a Docker Swarm.
I have two Docker Services, let's say eureka-server and client-api - both are created using the same network and the containers can reach each other via this network (via e.g. curl). eureka-server correctly starts and client-api registers with Eureka right away.
Attempting to navigate to eureka_url/admin correctly shows the Keycloak login page and redirects back to the Spring Boot Admin UI after a successful login. However, no applications are registered and I cannot figure out why.
I've attempted to enable more debug/trace logging, but I see absolutely no logs; it's as if SBA is simply not fetching the Eureka registry.
Does anyone know of a way to troubleshoot this behaviour? Has anyone had this issue?
EDIT:
I'm not quite sure which settings may be pertinent to the issue, but here are some of my configuration files (as code snippets since they're not that small, I hope that's OK):
application.yaml
(Includes base eureka properties, SBA properties, and Keycloak properties for SBA)
---
eureka:
  hostname: localhost
  port: 8761
  client:
    register-with-eureka: false
    # Registry must be fetched so that Spring Boot Admin knows that there are registered applications
    fetch-registry: true
    serviceUrl:
      defaultZone: http://${eureka.hostname}:${eureka.port}/eureka/
  instance:
    lease-renewal-interval-in-seconds: 10
    lease-expiration-duration-in-seconds: 30
  environment: eureka-test-${user.name}
  server:
    enable-self-preservation: false # Intentionally disabled for non-production
spring:
  application:
    name: eureka-server
  boot:
    admin:
      client:
        prefer-ip: true
      # Since we are running in Eureka, "/" is already the root path for Eureka itself
      # Register SBA under the "/admin" path
      context-path: /admin
  cloud:
    config:
      enabled: false
  main:
    allow-bean-definition-overriding: true
keycloak:
  realm: ${realm}
  auth-server-url: ${auth_url}
  # Client ID
  resource: spring-boot-admin-automated
  # Client secret used for service account grant
  credentials:
    secret: ${client_secret}
  ssl-required: external
  autodetect-bearer-only: true
  use-resource-role-mappings: false
  token-minimum-time-to-live: 90
  principal-attribute: preferred_username
build.gradle
// Versioning / Spring parents poms
apply from: new File(project(':buildscripts').projectDir, '/dm-versions.gradle')
configurations {
    all*.exclude module: 'spring-boot-starter-tomcat'
}
ext {
    springBootAdminVersion = '2.3.1'
    keycloakVersion = '11.0.2'
}
dependencies {
    compileOnly 'org.projectlombok:lombok'
    implementation 'org.springframework.cloud:spring-cloud-starter-netflix-eureka-server'
    implementation "de.codecentric:spring-boot-admin-starter-server:${springBootAdminVersion}"
    implementation 'org.keycloak:keycloak-spring-boot-starter'
    implementation 'org.springframework.boot:spring-boot-starter-security'
    compile "org.keycloak:keycloak-admin-client:${keycloakVersion}"
    testCompileOnly 'org.projectlombok:lombok'
}
dependencyManagement {
    imports {
        mavenBom "org.keycloak.bom:keycloak-adapter-bom:${keycloakVersion}"
    }
}
The actual application code:
package com.app.eureka;
import de.codecentric.boot.admin.server.config.EnableAdminServer;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;
@EnableAdminServer
@EnableEurekaServer
@SpringBootApplication
public class EurekaServer {

    public static void main(String[] args) {
        SpringApplication.run(EurekaServer.class, args);
    }
}
Keycloak configuration:
package com.app.eureka.keycloak.config;
import de.codecentric.boot.admin.server.web.client.HttpHeadersProvider;
import org.keycloak.KeycloakPrincipal;
import org.keycloak.KeycloakSecurityContext;
import org.keycloak.OAuth2Constants;
import org.keycloak.adapters.springboot.KeycloakSpringBootProperties;
import org.keycloak.adapters.springsecurity.KeycloakConfiguration;
import org.keycloak.adapters.springsecurity.authentication.KeycloakAuthenticationProvider;
import org.keycloak.adapters.springsecurity.config.KeycloakWebSecurityConfigurerAdapter;
import org.keycloak.adapters.springsecurity.token.KeycloakAuthenticationToken;
import org.keycloak.admin.client.Keycloak;
import org.keycloak.admin.client.KeycloakBuilder;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Scope;
import org.springframework.context.annotation.ScopedProxyMode;
import org.springframework.http.HttpHeaders;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.core.authority.mapping.SimpleAuthorityMapper;
import org.springframework.security.core.session.SessionRegistry;
import org.springframework.security.core.session.SessionRegistryImpl;
import org.springframework.security.web.authentication.session.RegisterSessionAuthenticationStrategy;
import org.springframework.security.web.authentication.session.SessionAuthenticationStrategy;
import org.springframework.web.context.WebApplicationContext;
import org.springframework.web.context.request.RequestContextHolder;
import org.springframework.web.context.request.ServletRequestAttributes;
import java.security.Principal;
import java.util.Objects;
@KeycloakConfiguration
@EnableConfigurationProperties(KeycloakSpringBootProperties.class)
class KeycloakConfig extends KeycloakWebSecurityConfigurerAdapter {

    private static final String X_API_KEY = System.getProperty("sba_api_key");

    @Value("${keycloak.token-minimum-time-to-live:60}")
    private int tokenMinimumTimeToLive;

    /**
     * {@link HttpHeadersProvider} used to populate the {@link HttpHeaders} for
     * accessing the state of the discovered clients.
     *
     * @param keycloak
     * @return
     */
    @Bean
    public HttpHeadersProvider keycloakBearerAuthHeaderProvider(final Keycloak keycloak) {
        return provider -> {
            String accessToken = keycloak.tokenManager().getAccessTokenString();
            HttpHeaders headers = new HttpHeaders();
            headers.add("X-Api-Key", X_API_KEY);
            headers.add("X-Authorization-Token", "keycloak-bearer " + accessToken);
            return headers;
        };
    }
    /**
     * The Keycloak Admin client that provides the service-account Access-Token
     *
     * @param props
     * @return keycloakClient the prepared admin client
     */
    @Bean
    public Keycloak keycloak(KeycloakSpringBootProperties props) {
        final String secretString = "secret";
        Keycloak keycloakAdminClient = KeycloakBuilder.builder()
                .serverUrl(props.getAuthServerUrl())
                .realm(props.getRealm())
                .grantType(OAuth2Constants.CLIENT_CREDENTIALS)
                .clientId(props.getResource())
                .clientSecret((String) props.getCredentials().get(secretString))
                .build();
        keycloakAdminClient.tokenManager().setMinTokenValidity(tokenMinimumTimeToLive);
        return keycloakAdminClient;
    }
    /**
     * Put the SBA UI behind a Keycloak-secured login page.
     *
     * @param http
     */
    @Override
    protected void configure(HttpSecurity http) throws Exception {
        super.configure(http);
        http
            .csrf().disable()
            .authorizeRequests()
            .antMatchers("/**/*.css", "/admin/img/**", "/admin/third-party/**").permitAll()
            .antMatchers("/admin/**").hasRole("ADMIN")
            .anyRequest().permitAll();
    }
    @Autowired
    public void configureGlobal(final AuthenticationManagerBuilder auth) {
        SimpleAuthorityMapper grantedAuthorityMapper = new SimpleAuthorityMapper();
        grantedAuthorityMapper.setPrefix("ROLE_");
        grantedAuthorityMapper.setConvertToUpperCase(true);
        KeycloakAuthenticationProvider keycloakAuthenticationProvider = keycloakAuthenticationProvider();
        keycloakAuthenticationProvider.setGrantedAuthoritiesMapper(grantedAuthorityMapper);
        auth.authenticationProvider(keycloakAuthenticationProvider);
    }

    @Bean
    @Override
    protected SessionAuthenticationStrategy sessionAuthenticationStrategy() {
        return new RegisterSessionAuthenticationStrategy(buildSessionRegistry());
    }

    @Bean
    protected SessionRegistry buildSessionRegistry() {
        return new SessionRegistryImpl();
    }
    /**
     * Allows injecting a request-scoped wrapper for {@link KeycloakSecurityContext}.
     * <p>
     * Returns the {@link KeycloakSecurityContext} from the Spring
     * {@link ServletRequestAttributes}'s {@link Principal}.
     * <p>
     * The principal must support retrieval of the KeycloakSecurityContext, so at
     * this point, only {@link KeycloakPrincipal} values and
     * {@link KeycloakAuthenticationToken} are supported.
     *
     * @return the current <code>KeycloakSecurityContext</code>
     */
    @Bean
    @Scope(scopeName = WebApplicationContext.SCOPE_REQUEST, proxyMode = ScopedProxyMode.TARGET_CLASS)
    public KeycloakSecurityContext provideKeycloakSecurityContext() {
        ServletRequestAttributes attributes = (ServletRequestAttributes) RequestContextHolder.getRequestAttributes();
        Principal principal = Objects.requireNonNull(attributes).getRequest().getUserPrincipal();
        if (principal == null) {
            return null;
        }
        if (principal instanceof KeycloakAuthenticationToken) {
            principal = (Principal) ((KeycloakAuthenticationToken) principal).getPrincipal();
        }
        if (principal instanceof KeycloakPrincipal<?>) {
            return ((KeycloakPrincipal<?>) principal).getKeycloakSecurityContext();
        }
        return null;
    }
}
KeycloakConfigurationResolver
(separate class to prevent circular bean dependency that happens for some reason)
package com.app.eureka.keycloak.config;
import org.keycloak.adapters.springboot.KeycloakSpringBootConfigResolver;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
public class KeycloakConfigurationResolver {

    /**
     * Load Keycloak configuration from application.properties or application.yml
     *
     * @return
     */
    @Bean
    public KeycloakSpringBootConfigResolver keycloakConfigResolver() {
        return new KeycloakSpringBootConfigResolver();
    }
}
Logout controller
package com.app.eureka.keycloak.config;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.PostMapping;
import javax.servlet.http.HttpServletRequest;
@Controller
class LogoutController {

    /**
     * Logs the current user out, preventing access to the SBA UI
     *
     * @param request
     * @return
     * @throws Exception
     */
    @PostMapping("/admin/logout")
    public String logout(final HttpServletRequest request) throws Exception {
        request.logout();
        return "redirect:/admin";
    }
}
I unfortunately do not have a docker-compose.yaml as our deployment is done mostly through Ansible, and anonymizing those scripts is rather difficult.
The services are ultimately created as follows (using docker service create):
(some of these networks may not be relevant as this is a local swarm running on my personal node, of note are the swarm networks)
dev@ws:~$ docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
3ba4a65c319f   bridge            bridge    local
21065811cbff   docker_gwbridge   bridge    local
ti1ksbdxlouo   services          overlay   swarm
c59778b105b5   host              host      local
379lzdi0ljp4   ingress           overlay   swarm
dd92d2f75a31   none              null      local
eureka-server Dockerfile:
FROM registry/image:latest
MAINTAINER "dev@com.app"
COPY eureka-server.jar /home/myuser/eureka-server.jar
USER myuser
WORKDIR /home/myuser
CMD /usr/bin/java -jar \
    -Xmx523351K -Xss1M -XX:ReservedCodeCacheSize=240M \
    -XX:MaxMetaspaceSize=115625K \
    -Djava.security.egd=file:/dev/urandom eureka-server.jar \
    --server.port=8761; sh
Eureka/SBA app Docker swarm service:
dev@ws:~$ docker service create --name eureka-server -p 8080:8761 --replicas 1 --network services --hostname eureka-server --limit-cpu 1 --limit-memory 768m eureka-server
Client applications are then started as follows:
Dockerfile
FROM registry/image:latest
MAINTAINER "dev@com.app"
COPY client-api.jar /home/myuser/client-api.jar
USER myuser
WORKDIR /home/myuser
CMD /usr/bin/java -jar \
    -Xmx523351K -Xss1M -XX:ReservedCodeCacheSize=240M \
    -XX:MaxMetaspaceSize=115625K \
    -Djava.security.egd=file:/dev/urandom -Deureka.instance.hostname=client-api client-api.jar \
    --eureka.zone=http://eureka-server:8761/eureka --server.port=0; sh
And then created as Swarm services as follows:
dev@ws:~$ docker service create --name client-api --replicas 1 --network services --hostname client-api --limit-cpu 1 --limit-memory 768m client-api
On the client side, of note are the following eureka.client settings:
eureka:
  name: ${spring.application.name}
  instance:
    leaseRenewalIntervalInSeconds: 10
    instanceId: ${spring.cloud.client.hostname}:${spring.application.name}:${spring.application.instanceId:${random.int}}
    preferIpAddress: true
  client:
    registryFetchIntervalSeconds: 5
That's all I can think of right now. The created docker services are running in the same network and can ping each other by IP as well as by hostname (cannot show output right now as I am not actively working on this at the moment, unfortunately).
In the Eureka UI I can, in fact, see my client applications registered and running - it's only SBA which does not seem to notice that there are any.
I found nothing wrong with the configuration you have presented. The only weak lead I see is eureka.hostname=localhost in application.yml. localhost and loopback IPs are two things that are better avoided with Swarm. I think you should check whether it isn't something network-related.

gRPC streaming procedure returning "Method is unimplemented." when running in Azure Container Instance

I have a gRPC service defined and implemented in .NET Core 3.1 using C#. I have a streaming call defined like so:
service MyService {
rpc MyStreamingProcedure(Point) returns (stream ResponseValue);
}
In the generated service base class, the method looks like this:
public virtual global::System.Threading.Tasks.Task MyStreamingProcedure(global::MyService.gRPC.Point request, grpc::IServerStreamWriter<global::MyService.gRPC.ResponseValue> responseStream, grpc::ServerCallContext context)
{
    throw new grpc::RpcException(new grpc::Status(grpc::StatusCode.Unimplemented, ""));
}
In my service it is implemented by overriding this:
public override async Task MyStreamingProcedure(Point request, IServerStreamWriter<ResponseValue> responseStream, ServerCallContext context)
{
/* magic here */
}
I have this building in a docker container, and when I run it on localhost it runs perfectly:
docker run -it -p 8001:8001 mycontainerregistry.azurecr.io/myservice.grpc:latest
Now here is the question. When I run this in an Azure Container Instance and call the client using a public IP address, the call fails with
Unhandled exception. Grpc.Core.RpcException: Status(StatusCode=Unimplemented, Detail="Method is unimplemented.")
at Grpc.Net.Client.Internal.HttpContentClientStreamReader`2.MoveNextCore(CancellationToken cancellationToken)
It appears that it is not seeing the override and is running the procedure in the base class. The unary call on the same gRPC service works fine using the container running in public ACI. Why would the streaming call behave differently on localhost and running over a public IP address?
I got the same error because I had not registered the service in the endpoint routing configuration:
app.UseEndpoints(endpoints =>
{
    endpoints.MapGrpcService<MyService>();
});

Selenium Grid - How to shutdown the node after execution?

I am trying to implement a solution that shuts down a node running inside a Docker (Swarm) container after a test run.
I looked at the Docker remove command, but cannot use docker container rm as the containers are managed at the service-task level.
I looked at the /lifecycle-manager API, but cannot reach the node from the client; the Docker stack runs behind an nginx server and only one port (4444) is exposed.
Finally, I looked at extending the grid node (DefaultRemoteProxy). Excuse my bad Java code, this is my first stab at writing Java. With this, it looks like I can stop the node, but it then re-registers with the hub.
How can I stop this re-registration process, or start the node without it?
My goal is to have a new container for every test and let the Docker orchestration bring up a new container when the node is shut down and its container is removed (Docker API: https://docs.docker.com/engine/api/v1.24/).
public class ExtendedProxy extends DefaultRemoteProxy implements TestSessionListener {

    public ExtendedProxy(RegistrationRequest request, GridRegistry registry) {
        super(request, registry);
    }

    @Override
    public void afterCommand(TestSession session, HttpServletRequest request, HttpServletResponse response) {
        RequestType type = SeleniumBasedRequest.createFromRequest(request, getRegistry()).extractRequestType();
        if (type == STOP_SESSION) {
            System.out.println("Going to Shutdown the Node");
            GridRegistry registry = getRegistry();
            registry.stop();
            registry.removeIfPresent(this);
        }
    }
}
Hub
[DefaultGridRegistry.assignRequestToProxy] - Shutting down registry.
[DefaultGridRegistry.removeIfPresent] - Cleaning up stale test sessions on the unregistered node
[DefaultGridRegistry.add] - Registered a node
Node
[ActiveSessions$1.onStop] - Removing session de04928d-7056-4b39-8137-27e9a0413024 (org.openqa.selenium.firefox.GeckoDriverService)
[SelfRegisteringRemote.registerToHub] - Registering the node to the hub: http://localhost:4444/grid/register
[SelfRegisteringRemote.registerToHub] - The node is registered to the hub and ready to use
I figured out the solution. I am answering my own question, hoping it'd benefit the community.
Start the node with the registerCycle command-line flag set to 0. This stops the auto-registration thread from ever getting created:
-registerCycle 0
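On the command line that could look like this (a sketch; the standalone jar name/version and the hub URL are assumptions about this setup):

```shell
# Start a grid node that never spawns the auto-registration thread.
java -jar selenium-server-standalone-3.141.59.jar \
     -role node \
     -hub http://localhost:4444/grid/register \
     -registerCycle 0
```

With registerCycle at 0, the node registers once at startup (or not at all if started detached from a hub) and never attempts to re-register after being removed.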
And in your class that extends DefaultRemoteProxy, override afterSession:
@Override
public void afterSession(TestSession session) {
    totalSessionsCompleted++;
    GridRegistry gridRegistry = getRegistry();
    for (TestSlot slot : getTestSlots()) {
        gridRegistry.forceRelease(slot, SessionTerminationReason.PROXY_REREGISTRATION);
    }
    teardown();
    gridRegistry.removeIfPresent(this);
}
When the client executes the driver.quit() method, the node de-registers from the hub.

scdf 1.7.3 docker k8s @Bean not running, no logs

As a user writing a processor as a cloud function
(SCDF 1.7.3, Spring Boot 1.5.9, spring-cloud-function-dependencies 1.0.2):
public class MyFunctionBootApp {

    public static void main(String[] args) {
        SpringApplication.run(MyFunctionBootApp.class,
                "--spring.cloud.stream.function.definition=toUpperCase");
    }

    @Bean
    public Function<String, String> toUpperCase() {
        return s -> {
            log.info("received:=" + s);
            return ((s + "jsa").toUpperCase());
        };
    }
}
I've created a simple stream => time | function-runner | log
function-runner-0.0.6.jar is in Nexus and is OK.
The Docker image is created OK:
Container entrypoint set to [java, -cp, /app/resources:/app/classes:/app/libs/*, function.runner.MyFunctionBootApp]
But no time message from the time pod arrives at the function-runner processor executing the toUpperCase function.
No logs.
I am checking the deployment using app.function-runner.spring.cloud.stream.function.definition=toUpperCase and @FunctionalScan.
Any clues?
We discussed function-runner being deprecated in favor of native support for Spring Cloud Function in Spring Cloud Stream. See: scdf-1-7-3-docker-k8s-function-runner-not-start. Please don't duplicate-post it.
Also, you're on a very old Spring Boot version (v1.5.9 - at least 1.5 years old). More importantly, Spring Boot 1.x is in maintenance-only mode and will be EOL by August 2019. See: spring-boot-1-x-eol-aug-1st-2019. It'd be good to upgrade to the latest 2.1.x.

SignalR + Redis in cluster not working

Background
When running the website with a single application instance (container), SignalR works perfectly.
When scaling out to more instances (>1), it throws errors and simply does not work. I looked for an explanation on the internet and found that I need to configure SignalR to work in a cluster. I chose Redis as my backplane.
I worked through many tutorials on how to do it right, and it's just not working for me.
Environment
I'm working with ASP.NET Core v2.1 hosted in Google Cloud. The application is deployed as a Docker container, using Kestrel + nginx. The Docker container runs in a Kubernetes cluster behind a load balancer.
My Configuration
Startup.cs:
public class Startup
{
    public Startup(IConfiguration configuration,
        IHostingEnvironment hostingEnvironment)
    {
        Configuration = configuration;
        HostingEnvironment = hostingEnvironment;
    }

    // This method gets called by the runtime. Use this method to add services to the container.
    public void ConfigureServices(IServiceCollection services)
    {
        // ...
        services.AddSignalR().AddRedis(options =>
        {
            options.Configuration = new ConfigurationOptions
            {
                EndPoints =
                {
                    { ConfigurationManager.Configuration["settings:signalR:redis:host"], int.Parse(ConfigurationManager.Configuration["settings:signalR:redis:port"]) }
                },
                KeepAlive = 180,
                DefaultVersion = new Version(2, 8, 8),
                Password = ConfigurationManager.Configuration["settings:signalR:redis:password"]
            };
        });
        // ...
    }

    // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
    public void Configure(IApplicationBuilder app, IHostingEnvironment env, IServiceProvider serviceProvider)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }
        else
        {
            app.UseExceptionHandler("/errors/general");
            app.UseHsts();
        }

        // nginx forward
        app.UseForwardedHeaders(new ForwardedHeadersOptions
        {
            ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto
        });

        app.UseSignalR(routes =>
        {
            routes.MapHub<StatisticsHub>("/hubs/myhub");
        });
    }
}
To verify that the connection to the Redis server succeeded, I checked the Kestrel output window:
The same behaviour (connected) is also found on the servers (2 replicas, not on the development environment).
To verify that SignalR is "really" using Redis (not just connecting), I used redis-cli to connect to the Redis server and found that:
From this I can understand that there is some "talk" going on over Redis.
I removed the website LoadBalancer (GCP) and deployed it again, now with Sticky-Session: ClientIP. This load balancer routes requests to the different containers.
The only place that I didn't change is the nginx configuration. Am I wrong?
The result
SignalR is not working in the browser. These errors are from the browser console:
scripts.min.js:1 WebSocket connection to 'wss://site.com/hubs/myhub?id=VNxKLazEKr9FKM4GPZRDhA' failed: Error during WebSocket handshake: Unexpected response code: 404
scripts.min.js:1 Error: Failed to start the transport 'WebSockets': undefined
Error: Connection disconnected with error 'Error: Server returned handshake error: Handshake was canceled.'.
scripts.min.js:1 Error: Connection disconnected with error 'Error: Server returned handshake error: Handshake was canceled.'.
files.min.js:1 Uncaught (in promise) Error: Error connecting to signalR. Error= Error
...
Question
What I missed? What to check?
