Redis flushall command is randomly being called - ruby-on-rails

I have a Ruby app in production that uses Sidekiq (which uses Redis), and I have discovered that FLUSHALL commands are being called, wiping the database (and with it all processed and scheduled jobs).
I don't understand what could be causing this.
Does anyone know how I can begin to trace the calls to FLUSHALL?
Thanks,

It is most likely that your Redis server is open to the public network without any protection - that is just asking for trouble, because anyone can connect and do much more damage than a FLUSHALL. If that is the case, use password authentication at the very least, after burning the compromised server - the attacker may have gained access to your server's operating system, and from there who knows where. More information at: http://antirez.com/news/96
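To lock the server down, something like the following goes in redis.conf (a minimal sketch - the password here is a placeholder, so generate a strong one):

    # listen only on the loopback/private interface
    bind 127.0.0.1
    # require AUTH before any command is accepted
    requirepass some-long-random-secret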
If that isn't the case and you have a rogue application somewhere that randomly calls unwanted commands, you can try tracking it down by combining MONITOR and CLIENT LIST.
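For example (a rough sketch - MONITOR adds measurable overhead, so run it only briefly; each MONITOR line includes the client's address, which you can match against CLIENT LIST):

    # watch live traffic for the offending command
    redis-cli MONITOR | grep -i flushall

    # in another session, map the address shown above to a connection
    redis-cli CLIENT LIST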
Lastly, you can consider renaming/disabling the FLUSHALL command, at least temporarily, until you get to the bottom of this.
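Renaming/disabling is done in redis.conf (the replacement name below is an arbitrary example):

    rename-command FLUSHALL ""
    # or, instead of disabling, hide it behind an obscure name:
    # rename-command FLUSHALL FLUSHALL_x7q2p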

Related

How can I ensure my Topshelf service stops before SQL Server on a computer restart?

I have a Windows service running using Topshelf. This service makes a lot of SQL Server queries. When the hosting computer is restarted, it almost always causes errors in my service, because SQL Server stops in the middle of one of my queries. I've been asked to solve this so the logs won't have so many errors, as these computers are restarted frequently.
Topshelf has some built-in WhenShutdown logic that you can use to run code when the computer is shut down/restarted, but there is still no guarantee that my service will stop before SQL Server, and based on the error frequency it pretty much always happens the other way around. I have also tried using Topshelf's WhenCustomCommandReceived to listen for the Windows PreShutdown event as shown here, but my tests (logging any custom command received and then rebooting my computer) show no logs. I also tried adding SQL Server as a dependency to my service, but this still doesn't guarantee mine will stop before SQL Server.
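(For reference, the service dependency described above can be declared with sc.exe - the service name here is a placeholder, and a space is required after depend=:

    sc config MyTopshelfService depend= MSSQLSERVER

Dependencies influence start order, but as noted above, stop order at shutdown is not strictly guaranteed.)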
I have also tried adding in the logic from this solution, but again I never see any logs indicating this code is even being executed on a restart. Any tips on how I can better solve this issue?
tl;dr: How do I ensure my Topshelf service stops before SQL Server on a computer restart/shutdown?
Thanks!

Redis went down and took our app with it. How to mitigate this in the future?

Today our Heroku-hosted app had a meltdown, which meant our company couldn't access or use any of our internal tools. This was worrisome, to say the least.
After much debugging, it turned out that Redis had crashed due to an AWS server outage in the UK... fun times. We only use Redis for Action Cable, so it seemed really odd that a crash of one connected tool would take down the entire app.
Has anyone had an issue like this before and know of a way to mitigate this in the future?
Welcome to the club :)
There are some strategies to prevent such a situation in the future:
In general, you need to implement the Circuit Breaker pattern around your Redis client.
The implementation depends on your application architecture, and it's hard to say "use this" - nothing will match your design 100%.
In a nutshell: you define rules that control when traffic to Redis is cut off entirely, e.g. 10 failed connection attempts within 30 seconds.
While the circuit is open no traffic is sent, but periodic checks test whether Redis is back, and as soon as it is, traffic flows to Redis again. A sketch of the pattern follows.
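Here is a minimal, hand-rolled version in Ruby (the class name and thresholds are illustrative; gems such as stoplight or circuitbox provide hardened implementations of the same idea):

    require "redis"

    # Trips after too many connection failures within a rolling window,
    # then blocks calls until a cooldown passes and a probe succeeds.
    class RedisCircuitBreaker
      def initialize(redis, failures: 10, window: 30, cooldown: 15)
        @redis, @failures, @window, @cooldown = redis, failures, window, cooldown
        @errors, @opened_at = [], nil
      end

      def call
        raise "redis circuit open" if open?
        result = yield @redis
        @errors.clear            # a success closes the circuit again
        @opened_at = nil
        result
      rescue Redis::BaseConnectionError
        record_failure
        raise
      end

      private

      def open?
        return false unless @opened_at
        return true if Time.now - @opened_at < @cooldown
        @opened_at = nil         # half-open: let one probe through
        false
      end

      def record_failure
        now = Time.now
        @errors << now
        @errors.reject! { |t| now - t > @window }  # drop stale failures
        @opened_at = now if @errors.size >= @failures
      end
    end

    # breaker = RedisCircuitBreaker.new(Redis.new)
    # breaker.call { |r| r.get("some_key") }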
Use Redis Sentinel or Redis Cluster (your Redis client must support whichever you pick). Either gives you a failover mode - read more on the Redis site.
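For example, redis-rb supports Sentinel out of the box (the hosts, ports and the master name "mymaster" are placeholders):

    require "redis"

    SENTINELS = [
      { host: "10.0.0.1", port: 26379 },
      { host: "10.0.0.2", port: 26379 }
    ]

    redis = Redis.new(url: "redis://mymaster", sentinels: SENTINELS, role: :master)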

Service Worker file and an offline mode

Do I understand correctly that a service worker file in a PWA should not be cached by the PWA? As I understand it, once registered, installed and activated, it exists as an entity separate from the page in the browser environment and gets reloaded by the browser once a new version is found (I am omitting details that are not important here). So I see no reason to cache a service worker file: the browser effectively caches it itself by storing it internally, and caching it yourself would complicate discovery of code updates while bringing no benefit.
However, if the service worker file is not cached, refreshing a page that registers it while in an offline mode produces an error, because the file cannot be retrieved when the network is down.
What's the best way to eliminate this error? Or should I cache a service worker file? What's the most efficient strategy here?
I was doing some reading on PWA but found no clear explanation of the matter. Please help me with your advice if possible.
You're correct. Never cache service-worker.js.
To avoid the error that comes from trying to register without connectivity, simply check the connection state via window.navigator.onLine and skip calling register() if offline.
You can listen for network state changes and call registration at a later point in time if you want.
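A minimal sketch of both points (the script path is a placeholder):

    function registerSW() {
      if ('serviceWorker' in navigator) {
        navigator.serviceWorker.register('/service-worker.js');
      }
    }

    if (navigator.onLine) {
      registerSW();
    } else {
      // retry once connectivity returns
      window.addEventListener('online', registerSW, { once: true });
    }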

How do you ensure your Rails server keeps running

What is the common approach to making sure that a Rails server is auto-restarted after a serious crash or a process kill? How do you deal with hanging processes? I have nginx and thin running on my production server - would you suggest putting something between them? Or using another server?
Firstly:
You should identify the cause of a process hang or kill. These are not normal behaviours and indicate a fault somewhere.
Look for:
Insufficient memory or high load before a crash - indicates a configuration problem.
Versions of nginx that are too new.
If you're virtualising, this can cause a number of subtle problems with Linux kernels that may cause segfaults. If you're using EC2, use Amazon Linux for your best chance. Ubuntu Server is too bleeding edge for this purpose.
In order to do the restarts, I suggest you use monit, as this is quick, easy and reliable - it's the normal way to do this.
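A minimal monit stanza for a thin worker might look like this (the pidfile path, commands, port and memory limit are placeholders for your setup):

    check process thin_3000 with pidfile /var/run/thin/thin.3000.pid
      start program = "/usr/local/bin/thin start -C /etc/thin/app.yml -o 3000"
      stop program  = "/usr/local/bin/thin stop  -C /etc/thin/app.yml -o 3000"
      if failed port 3000 protocol http for 2 cycles then restart
      if totalmem > 300.0 MB for 5 cycles then restart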
Lastly, I suggest you set up external monitoring as well, using something like Pingdom, as even monit won't catch every type of fault, such as hardware failures.
If you only want to monitor an application, I always use Nagios with Centreon. You can set up email alerting for when your Rails server is down. You have to set up NRPE on every machine you want to monitor.
When an error is detected, you can run a bash script to kill hanging processes and restart the server automatically. Personally, I never do that, because a crash means something has gone wrong, so I restart manually in order to check everything.
Have a look here: http://www.centreon.com/

Rescue rails app from server failure

I have a Rails app which is hosted on a dedicated server. Today something happened: the app doesn't respond and I have no SSH access; restarting doesn't help and I am waiting for tech support to respond. But that is not the question - I just need this app to stay online even if the server fails. What is the easiest option? Can I set up a second server on different hosting and serve from there in case of failure? If so, how do I sync the db and files? The application is not heavily loaded; I just need it to be available.
Difficult problem to solve. There's no one proven way to make this happen, but in general you need "No Single Point of Failure"
There's an entire science devoted to reliability in web applications -- no way can you get that answered in a SO question.
You can take frequent backups of your database and store them on S3 (and/or somewhere else; see the backup sketch after this list). You can then:
have an image of your application server at your host,
spin it up when your server dies,
restore the database,
have the new application server take over responsibility (easiest way: assume the old server's IP address).
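A hedged sketch of the backup step for a Postgres-backed app (the database name, bucket and schedule are placeholders; run it from cron with AWS CLI credentials in place):

    #!/bin/sh
    # dump the production database and ship it to S3
    STAMP=$(date +%Y%m%d%H%M)
    pg_dump -Fc myapp_production > /tmp/myapp_$STAMP.dump
    aws s3 cp /tmp/myapp_$STAMP.dump s3://my-backup-bucket/db/myapp_$STAMP.dump
    rm /tmp/myapp_$STAMP.dump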
