I would like to store my application's queries in a file, so the queries are taken out of the code.
The reason I need something like this is that on my project there is one person who makes all the necessary changes to the queries and sends them to us; we then have to go into the code, look for the specific query, and change it.
I was thinking of storing all the queries in a plist or a localizable .strings file, so this person can directly change, in a single file, every query that needs to be changed.
But I am not sure whether this could cause a performance issue.
Has anybody done something like that?
I am looking for a solution for logging data changes for a public API.
The API needs to tell the client app which database tables have changed and need to be synchronised since the app last synchronised, and this needs to be scoped to a specific brand and country.
Current Solution:
A Version table with the class_names of models, which is touched by every model on create, delete, touch and save.
When we touch the Version for a specific model, we also look at its reflected associations and touch those too.
The Version model is scoped to brand and country.
The REST API responds to a request that includes a last_sync_at timestamp, brand and country.
Rails looks up Version records with the given attributes and returns the class_names of the models that changed since the last_sync_at timestamp.
This solution works, but performance is a problem and it is also hard to maintain.
UPDATE 1:
Maybe the simpler question is:
What is the best practice for working out and telling frontend apps when and what needs to be synchronized, in terms of the whole concept?
Conditions:
Frontend apps need to download only their own content changes, not the whole dataset.
Synchronization must not be triggered when an application from a different country or brand needs to be synchronized.
Thank you.
I think that the best solution would be to use Redis (or some other key-value store) and save your information there. Writing to Redis is much faster than writing to any SQL database. You could write a service class that saves the data like:
RegisterTableUpdate.set(table_name, country_id, brand_id, timestamp)
Such a call would save the given timestamp under a key that could look like, for example, table-update-1-1-users, where the first number is the country id and the second is the brand id, followed by the table name (or you could use country and brand names if needed). If you would like to find out which tables have changed, you would just need to fetch the Redis keys matching the pattern "table-update-1-1-*", iterate through them and check which are newer than the timestamp sent through the API.
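A minimal sketch of such a service class, assuming the redis gem and the key format described above; RegisterTableUpdate and changed_since are hypothetical names invented for this answer, not an existing API:

require "redis"

class RegisterTableUpdate
  # Record that table_name changed for the given country/brand at timestamp.
  def self.set(table_name, country_id, brand_id, timestamp)
    redis.set("table-update-#{country_id}-#{brand_id}-#{table_name}", timestamp.to_i)
  end

  # Return the names of tables updated since the given time for one country/brand.
  # Assumes table names contain no hyphens (Rails table names normally use underscores).
  def self.changed_since(country_id, brand_id, since)
    redis.keys("table-update-#{country_id}-#{brand_id}-*").select { |key|
      redis.get(key).to_i > since.to_i
    }.map { |key| key.split("-").last }
  end

  def self.redis
    @redis ||= Redis.new
  end
end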
It is worth remembering that Redis is not as reliable as SQL databases. Its reliability depends on configuration, so you might want to read the Redis guidelines and decide whether you would like to go for it.
You can take advantage of the fact that ActiveRecord automatically records the time it last updated a table row (the updated_at column).
When checking what needs to be updated, select the objects you are interested in and compare their updated_at with the timestamp from the client app.
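For example, a minimal sketch, assuming the client sends a last_sync_at parameter and Product stands in for whatever model is being synced:

changed = Product.where("updated_at > ?", Time.at(params[:last_sync_at].to_i))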
The advantage of this approach is that you don't need to keep an additional table that lists all the updates on models, which should speed things up for the API users and be easier to maintain.
The disadvantage is that you cannot see the changes in data over time; you only know that a change occurred and you can access the latest version. If you need to track changes in data over time efficiently, then I'm afraid you'll have to rework things from the top.
(read last part - this is what you are interested in)
I would recommend that you use the decorator design pattern for changing the client queries. The client sends a query for what it wants, and the server decides what to give it based on the client's last update.
so:
the client sends a query that includes the time it last synched
the server sees the query and takes into account the client's nature (device-country)
the server decorates (changes accordingly) the query to request from the DB only the relevant data, and if that is not possible:
after the data are returned from the database manager they are trimmed to be relevant to where they are going
the server returns to the client all the new stuff that the client cares about.
I assume that you have a time entered field on your DB entries.
In that case the "decoration" of the query (abstractly) would be just to add something like a "WHERE" clause in your query and state you want data entered after the last update.
Finally, if you want that to be done for many devices/locales/whatever, implement a decorator for the query and the result of the query and serve them to your clients as they should be served. (Keep in mind that in contrast with a subclassing approach you will only have to implement one decorator for each device/locale/whatever, not for all combinations!)
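A hedged Ruby sketch of that decoration step, assuming Rails-style models; BaseQuery, CountryScopeDecorator and BrandScopeDecorator are illustrative names, and Product stands in for your synced model:

class BaseQuery
  # The undecorated query: everything entered after the client's last sync.
  def to_relation(params)
    Product.where("updated_at > ?", Time.at(params[:last_sync_at].to_i))
  end
end

# Each decorator wraps a query and narrows it further.
class CountryScopeDecorator
  def initialize(query)
    @query = query
  end

  def to_relation(params)
    @query.to_relation(params).where(:country_id => params[:country_id])
  end
end

class BrandScopeDecorator
  def initialize(query)
    @query = query
  end

  def to_relation(params)
    @query.to_relation(params).where(:brand_id => params[:brand_id])
  end
end

# Compose only the decorators a given client needs:
query = BrandScopeDecorator.new(CountryScopeDecorator.new(BaseQuery.new))
records = query.to_relation(params)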
Hope this helped!
I've looked all over for an answer, but it seems like I'm missing something obvious. I've made a rather complex Core Data app before, but the answer to this question has eluded me for the past few months.
Here's the problem:
1) I have about 20 entities in my Model.
2) Some of these entities have user-editable objects, others have pre-loaded data
3) I would like to know if it's possible to update the pre-loaded entities with each new app update.
I know I can do this the "manual" way by specifying each updated attribute, but this is way too cumbersome. I want to just update all the pre-loaded entities once the user opens an updated version of the app. I don't want to touch the user-data.
Thank you so much for your help!
You could have a version number field in your schema which you can use to associate a version number with each record. If it has a value, it's a preload. Then for the preload stuff just insert the new data when the app opens, and then ignore/delete the old. Seems simple enough.
UPDATE:
The other alternative I believe is to separate your preloaded data into an entirely different data store. I have an app wherein I do this by delivering my preloaded data via a custom SQLite file, and user data in a Core Data store. I can do this because my preloaded data is read-only, which saves me from needing to copy the SQLite file into the documents directory. What this means is that, at every update, the new data file automatically overwrites the old by virtue of the app installation. The user's data is maintained as it should be.
Of course if your preloaded data is not read-only, then there's no way around the need to write code. In this case there's not much I can do for you, not having any more details about your problem.
I have a few data values that I need to store on my rails app and wanted to know if there are any alternatives to creating a database table just to do this simple task.
Background: I'm writing some analytics and dashboard tools for my Ruby on Rails app and I'm hoping to speed up the dashboard by caching results that will never change. Right now I pull all users for the last 30 days and re-arrange them so I can see the number of new users per day. It works great but takes quite a long time; in reality I should only need to calculate the most recent day and just store the rest of the array somewhere else.
What is the best way to store this array?
Creating a database table seems a bit overkill, and I'm not sure that global variables are the correct answer. Is there a best practice for persisting data like this?
If anyone has done anything like this before let me know what you did and how it turned out.
Ruby has a built-in Hash-based key-value store named PStore. It provides simple file-based, transactional persistence.
PStore documentation
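A minimal sketch; the file name and sample data are placeholders:

require "pstore"

store = PStore.new("dashboard_cache.pstore")
new_users_per_day = { "2011-05-01" => 12, "2011-05-02" => 9 } # stand-in for the computed counts

# Writes must happen inside a transaction.
store.transaction do
  store[:new_users_per_day] = new_users_per_day
end

# Pass true for a read-only transaction.
store.transaction(true) do
  cached = store[:new_users_per_day]
end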
If you've got a database already, it's really not a big deal to create a separate table for tracking this sort of thing. When doing reporting, it's often to your advantage to create derivative summary tables exactly like what you're describing. You can update these as required using a simple SQL statement and there's no worry that your temporary store will somehow go away.
That being said, the type of report you're trying to generate is actually something that can be done in real-time except on extravagantly large data sets. The key is to have indexes that describe the exact grouping operation you're trying to do. For instance, if you're grouping by calendar date, you can create a "date" field and sync it to the "created_at" time as required. An index on this date field will make doing a GROUP BY created_date very quick:
SELECT created_date AS on_date, COUNT(id) AS new_users FROM users GROUP BY created_date
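If you go this route, here is a hedged sketch of the created_date column and its index as a Rails migration; the names are illustrative:

class AddCreatedDateToUsers < ActiveRecord::Migration
  def self.up
    add_column :users, :created_date, :date
    # Backfill from the existing timestamp, then index for fast GROUP BY.
    execute "UPDATE users SET created_date = DATE(created_at)"
    add_index :users, :created_date
  end

  def self.down
    remove_index :users, :created_date
    remove_column :users, :created_date
  end
end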
Using a lightweight database like SQLite shouldn't feel like overkill. Alternatively, you can use key-value store solutions like Tokyo Cabinet, or even store the array in a flat file manually, but I really don't see any overkill in using SQLite.
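If you do go the flat-file route, a minimal sketch serializing the array as JSON; the path and variable names are placeholders:

require "json"

path = Rails.root.join("tmp", "new_users_per_day.json")
new_users_per_day = [12, 9, 15] # stand-in for the precomputed counts

# Persist the precomputed counts.
File.open(path, "w") { |f| f.write(JSON.generate(new_users_per_day)) }

# Load them back on the next request.
cached = File.exist?(path) ? JSON.parse(File.read(path)) : nil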
I'm looking for some ideas about saving a snapshot of some (different) records at the time of an event, for example a user getting a document from my application, so that this document can be regenerated later. What strategies do you recommend? Should I use the same table as the table with current values, or a separate historical table? Do you know of any plugins that could help me with the task? Please share your thoughts and solutions.
There are several plugins for this.
Acts_as_audited
acts_as_audited creates a single table for all of the auditable objects and requires no changes to your existing tables. You get one entry per change, with the changes stored as a hash in a memo field along with the type of change (create, update or delete). It is very easy to set up, with just a single statement in the application controller specifying which models you want audited.
Rolling back is up to you, but the information is there. Because the information stored is just the changes, building the whole object may be difficult due to subsequent changes.
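A hedged sketch of the model-level setup (the plugin also supports the controller-level declaration mentioned above; check its README):

class User < ActiveRecord::Base
  acts_as_audited
end

user = User.create(:name => "Alice")
user.update_attributes(:name => "Bob")
user.audits.count # => 2, one audit entry per change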
Acts_as_versioned
A bit more complicated to set up: you need a separate table for each object you want to version, and you have to add a version id to your existing table. Rollback is very easy. There are forks on GitHub that provide a hash of changes since the last version so you can easily highlight the differences (it's what I use). My guess is that this is the most popular solution.
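A hedged sketch of a typical acts_as_versioned setup, with illustrative names:

class Page < ActiveRecord::Base
  acts_as_versioned
end

# In a migration, create the companion versions table:
class CreatePageVersions < ActiveRecord::Migration
  def self.up
    Page.create_versioned_table
  end

  def self.down
    Page.drop_versioned_table
  end
end

page = Page.create(:title => "First draft")
page.update_attributes(:title => "Second draft")
page.version      # => 2
page.revert_to(1) # roll back to the first version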
One I have no experience with: acts_as_revisable. I'll probably give it a go next time I need versioning, as it looks much more sophisticated.
I did this once, a while back. We created a new table that had a very similar structure to the table we wanted to log, and whenever we needed to log something, we did something similar to this:
attrs = object_to_log.attributes
# Remove things like created_at, updated_at and other unneeded columns
attrs = attrs.except("id", "created_at", "updated_at")
log = MyLogger.new(attrs)
log.save
There's a very good chance there are plugins/gems to do stuff like this, though.
I have used acts_as_versioned for stuff like this.
The OP is a year old but thought I'd add vestal_versions to the mix. It uses a single table to track serialized hashes of each version. By traversing the record of changes, the models can be reverted to any point in time.
Seems to be the community favorite as of this post...
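A hedged sketch based on vestal_versions' documented usage:

class User < ActiveRecord::Base
  versioned
end

user = User.create(:name => "Original name")
user.update_attributes(:name => "New name")
user.version      # => 2
user.revert_to(1) # or revert_to(1.day.ago) for a point in time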
I have a number of documents (mainly Word and Excel) that I'd like to make available to users of my Rails app. However, I've never tried something like this before and was wondering what the best way to do this was? Seeing as there will only be a small number of Word documents, and all will be uploaded by me, do I just store them somewhere in my Rails app (i.e. public/docs or similar) or should I set up a separate FTP and link to that? Perhaps there's an even better way of doing this?
If they're to be publicly accessible, you definitely just want to stick them in public somewhere. Write a little helper to generate the URL for you based on however you want to refer to them in your app, for cleanliness (and so that if you do change the URL later, for example to bucket your files to keep your directory sizes under control, you don't have to change links all over your app, just in one place).
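A minimal sketch of such a helper; the /docs path and the helper name are placeholders for wherever you keep the files under public/:

module DocumentsHelper
  def document_path(filename)
    "/docs/#{filename}"
  end
end

# In a view: link_to "Q3 report", document_path("q3-report.xls")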
If, on the other hand, your files are only for logged-in users, you'll need to check that the user is authorised to view the file and then use something like send_file, or one of the webserver-specific methods like the X-Sendfile header, to send it back to them.
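A minimal sketch of the send_file approach; DocumentsController, require_login and the private storage path are illustrative:

class DocumentsController < ApplicationController
  before_filter :require_login

  def show
    # File.basename strips any directory components to avoid path traversal.
    filename = File.basename(params[:filename])
    send_file Rails.root.join("private", "docs", filename), :disposition => "attachment"
  end
end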
I would do as you suggested and put them in public/docs. If you are planning on making an overview/index page for the files and link directly to them it would be easier if they were stored locally instead of a remote FTP server. However, since you are the one who will be uploading and maintaining these files, I think you should go with the option that's easiest for you.