Does JHipster support sticky sessions with docker stack or HAProxy? - spring-security

Running a sample JHipster app (found at https://github.com/ehcache/ehcache3-samples/tree/master/fullstack), I deployed it to a Docker swarm (swarm mode) with docker stack; it worked fine and I could log in.
But when I started scaling the web app, I found that the session was lost whenever my request hit a container other than the first one.
I even saw this in the logs:
worker2 | org.springframework.security.web.authentication.rememberme.CookieTheftException: Invalid remember-me token (Series/token) mismatch. Implies previous cookie theft attack.
worker2 | at org.terracotta.demo.security.CustomPersistentRememberMeServices.getPersistentToken(CustomPersistentRememberMeServices.java:173)
worker2 | at org.terracotta.demo.security.CustomPersistentRememberMeServices.processAutoLoginCookie(CustomPersistentRememberMeServices.java:83)
worker2 | at org.springframework.security.web.authentication.rememberme.AbstractRememberMeServices.autoLogin(AbstractRememberMeServices.java:130)
while I was trying to log in again.
Is there something I need to set up to have the load balancer treat the session as unique?
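Docker swarm's ingress routing mesh balances per connection and has no built-in session affinity, so the usual approach is to put HAProxy in front and pin each client with a cookie. A minimal sketch, assuming HAProxy runs in front of the swarm and the backend addresses/ports are placeholders for your app tasks:

    # haproxy.cfg - cookie-based sticky sessions (sketch; addresses are placeholders)
    frontend http_in
        bind *:80
        default_backend jhipster_app

    backend jhipster_app
        balance roundrobin
        # Insert a SERVERID cookie so each client keeps hitting the same task
        cookie SERVERID insert indirect nocache
        server app1 10.0.0.11:8080 check cookie app1
        server app2 10.0.0.12:8080 check cookie app2

Alternatively, a session store shared across all replicas removes the need for stickiness altogether.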

Related

Cannot connect to local MQTT server (running on Windows 10) from docker instance

The RabbitMQ server is running locally on Windows 10, and Docker is running on the same machine.
I'm running a device simulator in Docker, and it has to talk to the local RabbitMQ server over MQTT.
It had been working, but one day it stopped.
Here is the device log:
mqtt-client.cpp:322 | Failed to connect to broker at 'xxx#xxx.xxxxxx.com/:1883': code=15, message='Lookup error.'
Keep in mind that calls have been made from Docker (latest version) to the local web server, which has the same domain name:
https-commissioning-channel.cpp:81 | [HttpsCommissioningChannel] using token to contact bootstrap service at 'https://xxx.xxxxxx.com/apibst/alo/v1/bootstrap/device-info'
So you can see the domain name has been resolved. As for the firewall configuration, port 1883 is open (and it had been working before). RabbitMQ is running.
What might be the issue, and what should I do to make the call go through?
As per the comments, xxx#xxx.xxxxxx.com/:1883 should not contain a slash (xxx#xxx.xxxxxx.com:1883) - see the MQTT URI scheme.
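To make the fix concrete (the hostname is obfuscated in the logs, so this is purely illustrative):

    # As logged - the trailing slash before the port breaks the host lookup:
    #   xxx#xxx.xxxxxx.com/:1883
    # Corrected - host, then colon, then port, with no slash:
    #   xxx#xxx.xxxxxx.com:1883

The "Lookup error" (code=15) is consistent with the client treating the slash as part of the hostname and failing to resolve it.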

Container always reaches same backend on replicated services

I'm deploying a 3 tier application using docker swarm, similar to:
                 --> BACK01-01 --         --> BACK02-01
                |                |       |
FRONTEND-01 ----+--> BACK01-02 --+-------+--> BACK02-02
                |                |       |
                 --> BACK01-03 --         --> BACK02-03

Frontend             Back Service 01          Back Service 02
This is a 3-node swarm, where each *-01 service task runs on the manager node, each *-02 task runs on worker-node-01, and each *-03 task runs on worker-node-02.
All communication between services uses gRPC, creating a new connection per request.
All I want with this is to distribute the load over every replica.
I made requests to the frontend sequentially; each one triggers a request to back01, which in turn makes a request to back02. But after 50 requests, all inner requests were made to back01-03 and back02-03, and the others were never reached.
I'm using the default service configuration, and the stack was deployed through the Portainer GUI.
Is there anything that I'm not understanding?
P.S.: I had tested service load balancing with a simple HTTP and gRPC server returning the container ID, with 4 replicas on one node, and it returned each one sequentially.
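One thing worth checking (an assumption, since the stack file isn't shown): swarm's default VIP load balancing happens per TCP connection, and gRPC channels are long-lived HTTP/2 connections that may be reused under the hood even when the code appears to open a new connection per request, so all traffic can end up pinned to one task. A sketch of switching a service to DNS round-robin so clients see every task IP (compose file format 3.3+, placeholder image name):

    version: "3.7"
    services:
      back01:
        image: example/back01:latest   # placeholder
        deploy:
          replicas: 3
          endpoint_mode: dnsrr         # no VIP; DNS returns all task IPs

With dnsrr, the gRPC client can resolve and balance across the individual task IPs instead of always connecting through the single virtual IP.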

Spring Boot Admin - Running in Docker Swarm weirdly

I am running multiple Spring-Boot servers all connected to a Spring Boot Admin instance. Everything is running in the same Docker Swarm.
Spring Boot Admin keeps reporting on these "fake" instances that pop up and die. They are up for 1 second and then become unresponsive. When I clear them, they come back. The details for that instance show this error:
Fetching live health status failed. This is the last known information.
Request failed with status code 502
Here's a screenshot: [screenshot omitted]
This is the same for all my APIs, and it is giving us an inaccurate health reading of our services. How can I get Admin to stop reporting on these non-existent containers?
I've looked in all my nodes and can't find any containers (running or stopped) that match the unresponsive containers that Admin is reporting.

Hyperledger Composer Identity Issue error after network restart (code:20, authorization failure)

I am using Docker Swarm and docker-compose to setup my Fabric (v1.1) and Composer (v0.19.18) networks.
I wanted to test how my Swarm/Fabric networks would respond to a host/EC2 failure, so I manually rebooted the host running the fabric-ca, orderer, and peer0 containers.
Before the reboot, everything ran perfectly with respect to issuing identities. After the reboot, although all of the Fabric containers restarted and appear to be functioning properly, I am unable to issue identities with the main admin#network card.
After the reboot, composer network ping -c admin#network returns successfully, but composer identity issue (via the CLI or JavaScript) returns code 20 errors as described here:
"fabric-ca request register failed with errors [[{\"code\":20,\"message\":\"Authorization failure\"}]]"
I am guessing the issue is stemming from the host reboot and some difference in how it is restarting the Fabric containers. I can post my docker-compose files if necessary.
If your fabric-ca-server has restarted and its registration database hasn't been persisted (for example, the database is stored on the file system of the container, and loss of that container means loss of the contents of that file system), then the CA server will create a completely new bootstrap identity called admin for issuing identities. That identity is not the one you already have, so the card you hold is no longer a valid identity for the fabric-ca-server, although it is still a valid identity for the Fabric network itself. This is why you now get an authorization failure from the fabric-ca-server: the identity called admin that you currently have is no longer known to it.
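A common guard against this (a sketch, assuming the CA uses its default embedded SQLite database in the server home directory) is to mount the fabric-ca-server home on a named volume so the registration database survives loss of the container:

    # docker-compose fragment - persist the CA's registration database (sketch)
    services:
      ca:
        image: hyperledger/fabric-ca:1.1.0
        volumes:
          - ca-data:/etc/hyperledger/fabric-ca-server   # holds fabric-ca-server.db
    volumes:
      ca-data:

With the database persisted, a restarted fabric-ca-server keeps its existing bootstrap admin identity instead of generating a fresh one.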

Gateway app cannot connect to microservices

We are using JHipster and Docker for our microservices architecture. We just deployed our application stack to Docker Swarm (docker-compose version 3) with only one node active, and the gateway app is throwing Zuul timeouts when connecting to the backend microservices. We have a different environment where we are not using swarm (docker-compose version 2), and it works great. In swarm, I was able to curl the backend microservices from the gateway app using containername:port but not containerIp:port. I am lost here, as I could not narrow down whether it is a swarm issue or a JHipster issue. I even changed 'prefer-ip-address: false' in our app properties, but the issue remains. Any leads on what the issue could be?
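For reference, prefer-ip-address is an Eureka client instance setting; a minimal sketch of the relevant fragment of the application config (assuming the registry is Eureka, as in a standard JHipster microservices stack):

    # application.yml fragment (sketch)
    eureka:
      instance:
        prefer-ip-address: false   # register with the hostname (container name),
                                   # which is resolvable on the swarm overlay network,
                                   # rather than the task IP

Since containername:port is reachable from the gateway but containerIp:port is not, making sure every service registers with its container name rather than its IP is the direction this setting points in.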
