PHP system design for rollback on exceptions

I have a system which, upon request, does things such as extracting a zip, creating directories, and inserting database information.
It could fail at any stage for any number of reasons: permissions, a bad file format, a database error.
I don't want the system to be left partially executed because of an exception.
How exactly would I implement a rollback system?
What I'm thinking is: for every action, push a string encoding the opposite action onto a stack (or into a database), and on any failure pop each entry and run it through eval().
Is there any other built-in way, or any tips, before I start this?
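For illustration, here is a minimal PHP sketch of that undo-stack idea, using closures instead of eval() strings (safer, and they carry their context with them). The database work uses PDO's built-in transactions, so only the filesystem steps need manual compensation; the paths and table name are made up:

$undo = [];

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pdo->beginTransaction();

try {
    $dir = '/tmp/extracted';
    if (!mkdir($dir)) {
        throw new RuntimeException('Could not create directory');
    }
    // Push the compensating action for the step that just succeeded.
    $undo[] = function () use ($dir) { rmdir($dir); };

    $zip = new ZipArchive();
    if ($zip->open('upload.zip') !== true) {
        throw new RuntimeException('Bad zip file');
    }
    $zip->extractTo($dir);
    $zip->close();
    // ... push further closures to delete the extracted files, etc. ...

    $pdo->exec("INSERT INTO uploads (path) VALUES ('/tmp/extracted')");

    $pdo->commit();
} catch (Exception $e) {
    $pdo->rollBack();                // all DB work undone in one call
    while ($f = array_pop($undo)) {  // filesystem work undone in reverse order
        $f();
    }
    throw $e;
}

Note that rmdir() only removes empty directories, so a real compensator would have to delete the extracted files first; the sketch only shows the shape of the pattern.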

I ran into a similar situation. The best approach I could figure out is to make verifications during the process (test the zip extraction, test the database connection, verify the user name is available, and so on), and then, at the end of the process, apply all the necessary functions (submit data, register the user, ...).


How to handle skipWaiting and lazyloaded resources

So this is something I've been racking my brain about a bit. Consider the following scenario:
I'm working on my project, I build it, and in my bundle is a lazy-loaded module, module-a-[oldhash].js, that will get lazy-loaded at some point in time.
Everything is fine and dandy.
I do some more work on my project, create a new bundle, and deploy; now my content hash has changed: module-a-[newhash].js. I go to my page, my service worker calls skipWaiting, but my page still tries to request module-a-[oldhash].js, which no longer exists.
How do I go about this? The only way I can think of handling it is to show an 'update available' message that posts a skipWaiting message to the service worker and reloads the page on the controllerchange event. But I'm curious whether there's a way to achieve the same thing without having to include such a notification/toast pattern and a reload.
Additionally, it's my understanding that this only poses a problem with lazy-loaded resources.
Is my understanding of these problems correct? What are some common patterns for dealing with this?
Pretty much everything you describe there is correct. I'll just point out that this is a problem that extends beyond the use of a service worker. It can easily happen with long-lived single page apps that attempt to lazy-load a URL that has been replaced server-side with a new deployment.
There's some general information about the problem and potential solutions collected at the Paying Attention while Loading Lazily site and its associated video.
In general, the best practice is to:
Always assume that lazy-loading might fail (for whatever reason) and handle those failures gracefully. One approach might be to ask a user to reload the page upon encountering a failure.
Using a cache-first service worker can help protect against lazy-loading failures, at the expense of delaying updates until the newly installed service worker moves from waiting to active. As you mentioned, the best practice tends to be to show something in your UI letting the user know that updates are available; once they opt in to accepting those updates, postMessage() to the service worker telling it to call skipWaiting(); and finally, listen for the controllerchange event and call window.location.reload() when it fires.
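A minimal sketch of that opt-in flow, assuming the service worker lives at /sw.js; the 'SKIP_WAITING' message shape is just a convention between the page and the worker, not a browser API:

// Page code: detect a waiting or freshly installed service worker.
navigator.serviceWorker.register('/sw.js').then((registration) => {
  const promptUserToUpdate = () => {
    // Replace confirm() with your real toast/notification UI.
    if (confirm('Update available. Reload now?')) {
      registration.waiting.postMessage({ type: 'SKIP_WAITING' });
    }
  };

  if (registration.waiting) promptUserToUpdate();

  registration.addEventListener('updatefound', () => {
    const newWorker = registration.installing;
    newWorker.addEventListener('statechange', () => {
      if (newWorker.state === 'installed' && navigator.serviceWorker.controller) {
        promptUserToUpdate();
      }
    });
  });
});

// Reload (once) when the new service worker takes control.
let refreshing = false;
navigator.serviceWorker.addEventListener('controllerchange', () => {
  if (refreshing) return;
  refreshing = true;
  window.location.reload();
});

// In sw.js: honor the opt-in message.
self.addEventListener('message', (event) => {
  if (event.data && event.data.type === 'SKIP_WAITING') {
    self.skipWaiting();
  }
});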

Rails save log data to database

Is it possible to access the information being saved into a Rails log file without reading the log file? To be clear, I do not want to ship the log file as a batch process; rather, for every event that is written into the log file, I want to also send it via a background job to a separate database.
I have multiple apps running in Docker containers and wish to save the log entries of each into a shared telemetry database running on the server. Currently the logs are formatted with lograge, but I have not figured out how to access this information directly and send it to a background job to be processed. (As stated before, I would like direct access to the data being written to the log so I can send it via a background job.)
I am aware of the command Rails.logger.instance_variable_get(:@logger); however, what I am looking for is the actual data being saved to the logs so I can ship it to a database.
The reasoning behind this is that there are multiple Rails APIs running in Docker containers. I have an after_action set up to run a background job that I hoped would send just the individual log entry, but this is where I am stuck. Sizing isn't an issue, as the data stored in this database is to be purged every 2 weeks. This is more a tool for the in-house devs to track telemetry through a dashboard. I appreciate you taking the time to respond.
You would probably have to go through your app code and manually save the output from the logger into a table/field in your database inline. Theoretically, any data that ends up in your log should be accessible from within your app.
Depending on how much data you're planning on saving, this may not be the best idea, as it has the potential to grow your database extremely quickly (it's not uncommon for apps to create GBs worth of logs in a single day).
You could write a background job that opens the log files, searches for data, and saves it to your database, but the configuration required for that will depend largely on your hosting setup.
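A rough sketch of that log-parsing approach, assuming lograge's default key=value line format; the log path, the field whitelist, and the LogEntry model are all hypothetical:

# Parse lograge-style "method=GET path=/ status=200 ..." lines into hashes.
File.foreach(Rails.root.join('log', 'production.log')) do |line|
  fields = line.scan(/(\w+)=(\S+)/).to_h
  next if fields.empty?
  LogEntry.create!(fields.slice('method', 'path', 'status', 'duration'))
end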
So I got a solution working, and in fairness it wasn't as difficult as I had thought. As I was using the lograge gem for formatting the logs, I created a custom formatter through the guide in this link.
As I wanted the JSON format, I just copied that formatter, but I was able to put in the call to a background job at this point and also cleanse some data I did not want.
module Lograge
  module Formatters
    class SomeService < Lograge::Formatters::Json
      def call(data)
        # Drop fields we don't want to ship.
        data = data.delete_if do |k|
          [:format, :view, :db].include?(k)
        end
        # Faktory job to ship the data to the telemetry database.
        LogSenderJob.perform_async(data)
        # super (the JSON formatter) serializes data for the log line itself.
        super
      end
    end
  end
end
This was just one solution to the problem, made easier because I already had the data formatted via lograge. Another solution would be to create a custom logger and, in there, tell it to write to a database if necessary.
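A rough sketch of that custom-logger alternative: Ruby's Logger accepts any device object that responds to write and close, so the device can enqueue each entry as well as writing it through. TelemetryLogDevice is a hypothetical name, and LogSenderJob is the same hypothetical shipping job as above:

class TelemetryLogDevice
  def initialize(fallback)
    @fallback = fallback # e.g. a File or STDOUT
  end

  # Logger calls write with each formatted log line.
  def write(message)
    LogSenderJob.perform_async(message)
    @fallback.write(message)
  end

  def close
    @fallback.close
  end
end

# config/environments/production.rb
# config.logger = ActiveSupport::Logger.new(TelemetryLogDevice.new(STDOUT))

In practice you would probably batch or filter these, since enqueueing a job per log line adds overhead of its own.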

Preventing Rails from connecting to database during initialization

I am quite new at Ruby/Rails. I am building a service that makes an API available to users and ends up with some files created in the local filesystem, without any need to connect to any database. Then, once every few hours, I want to run a piece of Ruby code that takes these local files, uploads them to Amazon S3 and registers their location in a Postgres database.
Right now both pieces of code live together in the same project. I am observing that every time a user does something the system connects to the database. I have seen this answer, which recommends eliminating all traces of ActiveRecord from my code, but given that I want my background bookkeeping process to connect to the database, I am stuck on what to do.
Is it possible to define two different profiles (one with a database and one without) and specify which profile a certain function call should run under? Would this work?
I'm a bit confused by this; the app does not magically connect to the database for kicks on every request. It does so because a specific request requires it, generally through ActiveRecord but not exclusively.
If your system is connecting every time you make a request, that implies you have some sort of user-metric or authorisation-based code in there. Just killing off the database will cause this to fail, and you'll likely have to find it anyway to get your system working. I'd advise locating it.
Things to look for are before_filters in controllers, or database-backed session management, for example. Or look at what is in the logs - the query should appear - and that will tell you what is being loaded or modified.
It might even work to stop your database just before performing a user activity and see where the error leads you. Rinse and repeat until the user activity works without the database.
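For illustration, a hypothetical example of the kind of filter that forces a database hit on every request, and how the database-free part of the app could skip it (controller and method names are made up):

class ApplicationController < ActionController::Base
  # A filter like this runs a query on every single request.
  before_filter :load_current_user

  private

  def load_current_user
    @current_user = User.find_by_id(session[:user_id])
  end
end

class FilesApiController < ApplicationController
  # The file-only endpoints don't need the user, so no query is run.
  skip_before_filter :load_current_user
end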

Saving 500/404 errors in Ruby on Rails to a database?

Is there a way to save the 500/404 etc errors to your database so you can check them to see if there are any bugs on the site?
I thought you could send a JS AJAX request from the 500.html page, e.g.
/errors/create/?message=error_message&ip=123&browser=ie9
But I'm not sure how to get that information when running in production mode.
Any help greatly appreciated,
Alex
This is what I have in my application controller:
def rescue_action_in_public(exception)
  # This is called every time a non-local error is thrown.
  # Copy the error to the db for later analysis.
  Error.create :exception_name => exception.exception.to_s,
               :backtrace_info => exception.backtrace.to_s
  # Then handle the error as usual:
  super
end
As you can see, I have an Error model I created, and this saves a new record to the DB whenever an error happens. One thing to remember is that backtraces are much too big for string columns, so you will need something bigger, like a text type. This works in my Rails 3.0.5 app.
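For reference, a hypothetical migration for such an Error model (Rails 3.0-style up/down); the point above is that backtrace_info needs t.text rather than t.string:

class CreateErrors < ActiveRecord::Migration
  def self.up
    create_table :errors do |t|
      t.string :exception_name
      t.text :backtrace_info # backtraces overflow string columns
      t.timestamps
    end
  end

  def self.down
    drop_table :errors
  end
end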
Logging errors to the db is inadvisable, since these errors can often be caused by database issues in the first place. It's safer to append your errors to a file (on a separate disk) if your site is high-traffic; if even the file system is unresponsive, then your db won't work anyway. Even safer would be to use an asynchronous message queue hosted on another server. In both cases, you can create reports by periodically parsing your log output.
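A minimal sketch of that file-based alternative, assuming a controller-level handler; the log path and handler shape are assumptions, not a prescribed API:

class ApplicationController < ActionController::Base
  # Dedicated error log, kept out of the main application log.
  ERROR_LOGGER = Logger.new(Rails.root.join('log', 'errors.log'))

  rescue_from StandardError do |exception|
    ERROR_LOGGER.error("#{exception.class}: #{exception.message}")
    ERROR_LOGGER.error(exception.backtrace.join("\n"))
    raise exception # re-raise so Rails still renders the usual 500 page
  end
end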

How to run some tasks in the background and put the results into a global memory hash?

I'm building a website with Rails which lets users check whether some domains have been registered. The logic is designed like this:
There is a text field on a page that lets users input some domain names.
When the user clicks the check button, the input is posted to the server.
The server gets the inputs, creates a background task (which will be executed in another process), and immediately returns some text like "checking now..." to the user.
The background task posts the domain names to another site, gets the response, parses out the useful information, then puts the data into a global memory hash (the task takes 3 seconds).
The page containing "checking now..." has a JavaScript function that requests the check result from the server. It runs every 2 seconds until it gets the result.
There is an action on the server side to handle this check-status request. It checks the global memory hash; if it finds the result it returns it, otherwise it returns "[]".
(I'm using Rails 2.3.8 and delayed_job.)
I was a Java developer, and this is what I would do in Java. But I found it difficult to hold a "global memory hash" for the delayed_job worker and Rails, because they are in different processes.
Is my design impossible in Rails? Must I store the check results in the database, or in something like memcached?
And can these background tasks run in parallel?
It's not just between DelayedJob's workers and Rails' workers that you won't see the global variables; in production you will be dealing with multiple Rails worker processes as well. So even if you were to set a global variable in one Rails worker, some other Rails worker won't see it.
This is by design. It has the advantage that you can spread the load of Rails and DelayedJob across multiple machines, because they only deal with the stateless request, and look at the database system or other persistent storage to add the statefulness that your web application needs.
From what I've gathered, a Java web application may use a threaded model that would allow you to perform background tasks and set global variables like you want to. But that also limits you to a single machine; what would you do if you had to scale up?
Memcached actually sounds like a really good solution in this case. It's a snap to install, requires very little set up, and is easy to use from within Rails too.
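A minimal sketch of that approach with Rails 2.3-era delayed_job, assuming Rails.cache is backed by memcached; the job class, key scheme, and registered? helper are all hypothetical:

# The background job writes its result to memcached, keyed by request id.
class DomainCheckJob < Struct.new(:request_id, :domains)
  def perform
    # registered?(d) stands in for posting to the other site and parsing it.
    results = domains.map { |d| { :domain => d, :registered => registered?(d) } }
    Rails.cache.write("domain_check/#{request_id}", results.to_json, :expires_in => 10.minutes)
  end
end

# Enqueued from the action handling the form post:
# Delayed::Job.enqueue(DomainCheckJob.new(request_id, domains))

# The action polled by the JavaScript every 2 seconds:
def check_status
  render :json => (Rails.cache.read("domain_check/#{params[:id]}") || "[]")
end

Because each job is an independent row in the delayed_jobs table, running several workers lets these checks proceed in parallel.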
