How do I handle Redis DB down scenario, while using Spring Session with Redis store for session management ? What if Redis DB is down and user tries to access his/her session ?
When Redis is down, you can store sessions in your container instead, e.g. Apache with sticky sessions.
Write a filter that extends DelegatingFilterProxy and add a switch that controls whether sessions are stored in Redis.
If Redis is down during startup of your application, the application will fail to start.
If Redis goes down once your application is up and running, Spring tries reconnecting to Redis at specific intervals.
I guess what you're looking for has an answer here:
How to disable Redis Caching at run time if redis connection failed
In my current project, there is a Redis instance on Heroku, and Sidekiq is configured to process jobs from that instance. We need to migrate the Redis instance to Azure, and I wanted to use a configuration where one worker connects to the Redis instance on Heroku to process any queued jobs, while the rest of the workers connect to the new instance on Azure, to avoid any data loss.
I am new to Rails and Sidekiq. Please suggest a way to achieve the desired config.
The docs say that you can run this command to get the credentials:
heroku redis:credentials
Then get the connection string with:
heroku config:get REDIS_URL -a example-app
Then you can use the connection string from REDIS_URL to connect externally for a bit.
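With that connection string in hand, a minimal sketch of the split configuration could look like this. The env var names (`HEROKU_REDIS_URL`, `AZURE_REDIS_URL`, `SIDEKIQ_REDIS_URL`) are hypothetical; the idea is that each Sidekiq worker process reads its own Redis URL, so the drain worker points at Heroku while everything else (including job enqueueing) points at Azure:

```ruby
# config/initializers/sidekiq.rb
# Hypothetical env vars: set SIDEKIQ_REDIS_URL=$HEROKU_REDIS_URL for the one
# worker that drains the old queue; all other processes default to Azure.

azure_url = ENV["AZURE_REDIS_URL"]

Sidekiq.configure_server do |config|
  # Each worker process decides which instance to consume jobs from
  config.redis = { url: ENV.fetch("SIDEKIQ_REDIS_URL", azure_url) }
end

Sidekiq.configure_client do |config|
  # New jobs are only ever enqueued to the Azure instance
  config.redis = { url: azure_url }
end
```

Once the Heroku-connected worker reports an empty queue, it can be shut down and the Heroku instance retired.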
I'm trying to access my Redis database via Grafana Cloud on my laptop. The database is a Redis container working as a cache on a different device (a Pi). Accessing the Redis database via a Python script on my remote device is no problem, but trying to connect to it via Grafana (using the Redis Data Source plugin) doesn't work as intended and throws a connection error. Unfortunately, the documentation leaves me kind of clueless about the specific cause (any missing plugin dependencies?), so I'm thankful for every hint.
To be able to access Redis Server from Grafana Cloud it should be exposed to the Internet as Jan mentioned.
If you run Grafana in a Docker container, it should be started in host network mode (https://docs.docker.com/network/host/) to be able to access services on other devices.
If something is lacking or not clear in the Redis plugins documentation, please open an issue and we will update it: https://github.com/RedisGrafana/RedisGrafana/issues
I have a microservice that takes in webhooks to process, but it is currently getting pounded by the sender of said webhooks. Right now I take them and insert the webhooks into the DB for processing, but the data is so bursty at times that I don't have enough bandwidth to manage the flood of requests, and I cannot scale any more as I'm out of DB connections.

The current thought is to just take the webhooks and throw them into a Kafka queue for processing; using Kafka, I can scale up the number of frontend workers to whatever I need to handle the deluge of requests, and I get Kafka's replayability. By throwing the webhooks into Kafka, the frontend web server no longer needs a pool of DB connections, as it literally just takes the request and throws it into the queue for processing.

Does anyone have any knowledge on removing the DB connectivity from Puma, or an alternative to do what's being asked?
Currently running
ruby 2.6.3
rails 6.0.1
puma 3.11
Ended up using Puma's before_fork and on_worker_boot hooks in the config to avoid re-establishing the database connection for those particular web workers.
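For reference, a minimal config/puma.rb sketch of that approach. The PUMA_SKIP_DB flag is a hypothetical name for whatever switch marks the enqueue-only workers:

```ruby
# config/puma.rb
workers Integer(ENV.fetch("WEB_CONCURRENCY", 2))
threads 1, 5

before_fork do
  # Drop the master process's DB connections before forking,
  # so workers never share an inherited socket
  ActiveRecord::Base.connection_pool.disconnect! if defined?(ActiveRecord)
end

on_worker_boot do
  # Only re-establish the DB connection for workers that need it;
  # enqueue-only workers skip this and never take a pool connection
  if defined?(ActiveRecord) && ENV["PUMA_SKIP_DB"] != "1"
    ActiveRecord::Base.establish_connection
  end
end
```

The enqueue-only processes are then started with `PUMA_SKIP_DB=1`, leaving the DB connection pool entirely to the workers that actually process jobs.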
I'm facing problem when I tried to use redis as session server for the below configuration:
more than one windows servers hosting same application with https://github.com/Azure/aspnet-redis-providers
Elastic load balancer with weighted routing redirects requests to the all IIS servers
Redis is hosted in AWS elastic-cache and accessible from both servers
Redis works as a session server for one server at a time without any issue.
For each session, 3 keys are created:
"{/_ktffpxxxxxxg2xixdnhe}_Write_Lock"
"{/_ktffpxxxxxxg2xixdnhe}_Data"
"{/_ktffpxxxxxxg2xixdnhe}_Internal"
Issue: when more than one server tries to serve the same user by accessing the session in Redis at the same instant, if server1 has placed the _Write_Lock, then server2 fails to read, update the timeout, or write the data, and after that it clears the key.
--> Result: the user's next request to any server fails to verify his/her session.
Am I the only one who gets this issue? Please help.
Note: with session stickiness enabled in the ELB, the sign-out is not intermittent; however, that prevents us from taking a server out for an upgrade without losing all user sessions for that server.
We're running a rails app via heroku that connects to a windows azure VM, where I've set up a redis master/slave to act as a cache (slash quick reference data store). The problem is, we sporadically get redis timeouts. This is with a timeout of 10 seconds (which I know is more than it needs), and reestablishing redis connections on fork. And using hiredis as a driver.
Anyone have a clue why this might be happening? I know heroku and the azure vm are hosted on different coasts, so there's a bit of latency; could there be TCP request drops? I'm fairly out of ideas.
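One thing worth trying before hunting for TCP-level drops: shorter per-operation timeouts combined with client-side reconnect attempts, so a single stalled packet on the cross-coast link fails fast and retries instead of hanging for the full 10 seconds. A sketch using the redis-rb client (option names as in redis-rb 4.x; the values here are illustrative, not tuned):

```ruby
require "redis"

# Fail fast on a stalled connection and retry, rather than waiting
# out one long 10 s timeout across the coast-to-coast link.
redis = Redis.new(
  url: ENV["REDIS_URL"],        # e.g. the Azure VM's redis:// URL
  driver: :hiredis,             # keep the hiredis driver already in use
  connect_timeout: 1.0,         # seconds to establish the TCP connection
  read_timeout: 1.0,            # seconds to wait for a reply
  write_timeout: 1.0,
  reconnect_attempts: 3         # transparently retry on dropped connections
)
```

If the timeouts still fire with these settings, the latency spikes are likely on the network path itself (or Redis blocking on a slow command), which narrows down where to look next.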