Is there a way to schedule edits to a Firebase database? - ios

I am trying to create automated edits to the database in firebase. Is there a way to do that on the server-side? I am new to iOS development and swift so any help would be greatly appreciated.
Also, I've tried Zapier but the service is not specific enough for my needs.

Yes - Firebase has quite a flexible set of options for server-side updates, and it is simple enough to schedule a cron job to connect to Firebase and perform scheduled updates or edits.
The most generic approach is to use the REST API to perform your updates, although there are specific libraries supporting Node and other platforms.
It is worth being aware of the recent major upgrade to version 3 of Firebase, which introduced quite a few significant changes - it can be easy to confuse the older examples floating around with the new API, so be aware of the differences as you put together your first proof-of-concept examples.
I assume that you are looking to run on your own server, although another alternative is to use a container hosting environment (Google App Engine, etc.).
If you have your own server and are looking to integrate I would suggest starting with:
https://firebase.google.com/docs/server/setup#prerequisites
Then perhaps a quick look at:
https://firebase.googleblog.com/docs/web/quickstart.html
and
https://www.firebase.com/docs/rest/
If you are just getting started I would suggest a first task being to authenticate, retrieve and update a Firebase record.
You can configure server auth keys through the Firebase console and use these as part of your authentication process.
If you are unfamiliar with JWT then it is worth spending a little time getting up to speed on this and working through the examples at https://www.firebase.com/docs/rest/guide/user-auth.html
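As a concrete first step, here is a minimal sketch of that authenticate/retrieve/update cycle against the REST API. It happens to be in Ruby rather than Node, simply because any language with an HTTP client will do; the database URL, token and the messages/msg-001 path are placeholders to replace with your own.

    #!/usr/bin/env ruby
    # Minimal sketch: read and then update a single record via the Firebase REST API.
    # FIREBASE_URL and FIREBASE_TOKEN are assumed to be set in the environment.
    require "net/http"
    require "json"
    require "uri"

    base  = ENV.fetch("FIREBASE_URL")    # e.g. https://your-project.firebaseio.com
    token = ENV.fetch("FIREBASE_TOKEN")  # server auth key / token configured in the console

    # Retrieve: any path in the database can be read as JSON by appending .json
    get_uri = URI("#{base}/messages/msg-001.json?auth=#{token}")
    record  = JSON.parse(Net::HTTP.get(get_uri))
    puts "before: #{record.inspect}"

    # Update: PATCH merges the supplied fields into the existing record
    patch_uri = URI("#{base}/messages/msg-001.json?auth=#{token}")
    request   = Net::HTTP::Patch.new(patch_uri, "Content-Type" => "application/json")
    request.body = { "status" => "processed" }.to_json
    Net::HTTP.start(patch_uri.host, patch_uri.port, use_ssl: true) do |http|
      puts "update response: #{http.request(request).code}"
    end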
Further to your comment:
So the first approach that comes to mind is to run some kind of scheduled job via cron which would connect using the REST API, perform some kind of query on the existing data to identify the records that require an update, and remove or modify them.
Giving it a little more thought, you could extend this approach: rather than running at a recurring period shorter than the minimum anticipated deletion time, you could run the scheduler just to clean up at some longer interval and filter the results sent to the client so that you are not including stale data. This approach is discussed a little at Firebase chat - removing old messages
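To make that cron job concrete, a rough sketch follows. It assumes a messages node with a numeric timestamp field, a 24-hour retention window, and an .indexOn rule for timestamp in your security rules; all of those are placeholders to adapt.

    #!/usr/bin/env ruby
    # Sketch of a scheduled cleanup, run from cron, e.g.:
    #   0 * * * * /usr/bin/ruby /opt/scripts/firebase_cleanup.rb
    require "net/http"
    require "json"
    require "uri"

    base   = ENV.fetch("FIREBASE_URL")
    token  = ENV.fetch("FIREBASE_TOKEN")
    cutoff = ((Time.now - 24 * 60 * 60).to_f * 1000).to_i   # cutoff: 24 hours ago, in ms

    # Query the stale records (orderBy requires an .indexOn rule for "timestamp")
    query = URI.encode_www_form("orderBy" => '"timestamp"', "endAt" => cutoff, "auth" => token)
    stale = JSON.parse(Net::HTTP.get(URI("#{base}/messages.json?#{query}"))) || {}

    # Remove (or alternatively PATCH) each stale record
    stale.each_key do |key|
      uri = URI("#{base}/messages/#{key}.json?auth=#{token}")
      Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
        http.request(Net::HTTP::Delete.new(uri))
      end
    end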
Getting the right solution to your particular scenario will depend a lot on how well you structure your data, which can be counter-intuitive, particularly for users who have come from an RDBMS background.
There may be an inclination to keep the data slim and unpolluted with old, irrelevant data; however, Firebase is quite good at managing large, minimally structured data sets, and the overhead of this bloat may not be as bad as you might think.
If the filtering itself isn't sufficient and you don't have a server on which you can cron a cleanup process, then you can implement a Firebase worker process in Node or similar and have it running on a container service such as Heroku or Google App Engine. See Firebase push notifications - node worker for some ideas on how to approach this.
When asked, Google said they don't advise on where best to host worker services, but they did mention both Google App Engine and Heroku.
Another approach if you don't want to implement and host a watcher/worker process is to simply include some code in the client that checks for and removes stale data periodically.
The Firebase Queue is very cool but may be overkill for simply expiring stale data.

Related

Interval-based API access and processing a different DSL

Background
I'm currently working on a small Rails 5 project that needs to access and process an external API. There is a ruby wrapper gem available for the API, so accessing the data is not a problem.
Problem description
There are two parts of the equation that I am currently missing, and hoping someone out there can help me with.
1: I need to call the API, via Rails, every 15 minutes. How can I realize this? I was looking towards Active Job for this, but my research kind of stalled after getting no useful results.
2: The external API has different domain models and a different domain-specific language than my application. How can I map the different models without changes in Active Record?
1: I need to call the API, via Rails, every 15 minutes. How can I realize this? I was looking towards Active Job for this, but my research kind of stalled after getting no useful results.
The first problem you can solve using recurring tasks. The main idea is to run a process that performs some operation every x minutes (or days, or whatever fits your problem).
There are several tools that you can use. One of them is built into the Unix system: cron. You can read about it in the system's manual, and you can easily manage it using the whenever gem. The main disadvantage is that you need access to the system's crontab, which may be non-trivial on non-bare machines (for example, Platform-as-a-Service hosts such as Heroku).
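For instance, with whenever the schedule lives in config/schedule.rb and is written into the crontab with whenever --update-crontab; the rake task and job names below are placeholders.

    # config/schedule.rb (whenever DSL) - sketch
    every 15.minutes do
      # either run a rake task...
      rake "external_api:sync"
      # ...or enqueue a background job via a Rails runner:
      # runner "ExternalApiSyncJob.perform_later"
    end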
You should also take a look at clockwork, which does not rely on the system's cron. It uses an approach where you have a separate process running all the time that keeps an eye on the defined tasks.
In the second approach (having a separate process) you need to remember that time-consuming instructions may "lock" the process and postpone other tasks. In this case, you may want to use background processing such as sidekiq or delayed_job. The idea is to use one process for scheduling tasks at a certain time and another process to work through those tasks as soon as they appear in the queue, as in the sketch below.
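A minimal sketch of that split, assuming clockwork for the clock process and sidekiq for the workers; the class names are hypothetical.

    # clock.rb - the scheduling process, run with `bundle exec clockwork clock.rb`
    require "clockwork"
    require_relative "config/boot"
    require_relative "config/environment"   # load Rails so the worker class is available

    module Clockwork
      every(15.minutes, "external_api.sync") do
        ExternalApiSyncWorker.perform_async   # only enqueue; keep the clock process fast
      end
    end

    # app/workers/external_api_sync_worker.rb - picked up by the Sidekiq process
    class ExternalApiSyncWorker
      include Sidekiq::Worker

      def perform
        # Call the external API through its wrapper gem and persist/process the results;
        # ExternalApiSync is a hypothetical service object.
        ExternalApiSync.run
      end
    end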
2: The external API has different domain models and a different domain-specific language than my application. How can I map the different models without changes in Active Record?
You need to create a client that will consume the API and map its responses onto the models that you have in your application. This way, you don't make your models' schema dependent on the API's schema. Take a look at the resource_kit gem - it is a sample solution that uses this approach.
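A sketch of that mapping layer is below; the endpoint, field names and the Event struct are all invented for illustration, and in the real service the raw HTTP call would be replaced by the API's wrapper gem.

    require "net/http"
    require "json"
    require "uri"

    module ExternalApi
      # Plain app-side representation, independent of the API's vocabulary
      Event = Struct.new(:external_id, :title, :starts_at, keyword_init: true)

      class Client
        BASE_URL = "https://api.example.com/v1".freeze

        # In the real service this call would go through the wrapper gem;
        # a raw HTTP call keeps the sketch self-contained.
        def fetch_recent
          body = Net::HTTP.get(URI("#{BASE_URL}/events?since=15m"))
          JSON.parse(body).fetch("events", []).map { |attrs| map_event(attrs) }
        end

        private

        # Translate the API's field names (its DSL) into our own model's vocabulary
        def map_event(attrs)
          Event.new(
            external_id: attrs["id"],
            title:       attrs["display_name"],
            starts_at:   attrs["start_time"]
          )
        end
      end
    end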
Hi hdauven,
Processing the API every 15 minutes will affect your server performance, so do it with Sidekiq: it is a background job processor, and if you use Sidetiq with it, it will help you perform the task every 15 minutes automatically.
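For reference, a sketch of that Sidekiq + Sidetiq combination; the worker name is hypothetical, and Sidetiq schedules recurring Sidekiq jobs with an ice_cube recurrence rule.

    # Gemfile: gem "sidekiq"; gem "sidetiq"
    # app/workers/api_sync_worker.rb - recurring Sidekiq job via Sidetiq
    class ApiSyncWorker
      include Sidekiq::Worker
      include Sidetiq::Schedulable

      recurrence { hourly.minute_of_hour(0, 15, 30, 45) }   # i.e. every 15 minutes

      def perform
        # call the external API and process its response here
      end
    end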
You are accessing an external API, so why are you worrying about the different domain models?

Respond with a large number of objects through a Rails API

I currently have an API for one of my projects, and a service that is responsible for generating export files as CSVs, archiving them and storing them somewhere in the cloud.
Since my API is written in Rails and my service in plain Ruby, I use the Her gem in the service to interact with the API. But I find my current implementation not very performant, since I do a Model.all in my service, which in turn triggers a request that may contain far too many objects in the response.
I am curious how to improve this whole task. Here's what I've thought of:
implement pagination at API level and call Model.where(page: xxx) from my service;
generate the actual CSV at API level and send the CSV back to the service (this may be done sync or async).
If I were to use the first approach, how many objects should I retrieve per page? How big should a response be?
If I were to use the second approach, this would bring quite an overhead to the request (and I guess API requests shouldn't take that long) and I also wonder whether it's really the API's job to do this.
What approach should I follow? Or, is there something better that I'm missing?
You need to pass a lot of information through a Ruby process, and that is never simple; I don't think you're missing anything here.
If you decide to generate CSVs at the API level, then what do you gain by maintaining the service? You could just ditch the service altogether, because replacing it with an nginx proxy would do the same thing better (if you're just streaming the response from the API host).
If you decide to paginate, there will be a performance reduction for sure, but nobody can tell you exactly how much you should paginate - bigger pages will be faster and consume more memory (reducing throughput by letting you run fewer workers), while smaller pages will be slower and consume less memory but demand more workers because of IO wait times.
The exact numbers will depend on the IO response times of your API app, the cloud and your infrastructure. I'm afraid no one can give you a simple answer without experimentation via a stress test, and once you set up a stress test you will get a number of your own anyway - better than anybody's estimate.
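For the pagination route specifically, a rough sketch with Her might look like this. The page/per_page parameters, the column names and the page size of 500 are all assumptions, and the page size is exactly the number you would tune with the stress test described above.

    require "csv"
    require "her"

    # Her setup as in the service (the API URL is a placeholder)
    Her::API.setup url: "https://api.example.com" do |c|
      c.use Faraday::Request::UrlEncoded
      c.use Her::Middleware::DefaultParseJSON
      c.use Faraday::Adapter::NetHttp
    end

    class Model
      include Her::Model
    end

    # Build the CSV page by page instead of calling Model.all
    def export_csv(path, per_page: 500)
      CSV.open(path, "w") do |csv|
        csv << %w[id name created_at]   # hypothetical columns
        page = 1
        loop do
          batch = Model.where(page: page, per_page: per_page).to_a
          break if batch.empty?
          batch.each { |record| csv << [record.id, record.name, record.created_at] }
          page += 1
        end
      end
    end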
A suggestion: write a bit more about your problem, the constraints you are working under, etc., and maybe someone can help you with a more radical solution. For some reason I get the feeling that what you're really looking for is a background processor like sidekiq or delayed_job, or maybe connecting your service to the DB directly through a DB view if you are anxious to decouple your apps, or an nginx proxy for API responses, or nothing at all... but I really can't tell without more information.
I think it really depends on how you want to define 'performance' and what your goal for your API is. If you want to make sure no request to your API takes longer than 20 ms to respond, then adding pagination would be a reasonable approach, especially if the CSV generation is just an edge case and the API is really built for other services. The number of items per page would then be limited by the speed at which you can deliver them. Your service would not be particularly more performant (quite possibly less so), since it needs to call the API multiple times.
Creating an async call (maybe with a webhook as a callback) would be worth adding to your API if you think it is a valid use case for services to dump the whole record set.
Having said that, I think strictly speaking it is the job of the API to be quick and responsive. So maybe try to figure out how caching can improve response times, so that paging through all the records is reasonable. On the other hand, it is the job of the service to be mindful of the number of calls to the API, so maybe store old records locally and only poll for updates instead of dumping the whole set of records each time.
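The "only poll for updates" idea could look roughly like this on the service side, assuming the API accepts an updated_since filter (that assumption is the first thing to verify):

    require "time"

    CURSOR_FILE = ".last_sync".freeze

    # Remember where the last export/poll stopped
    last_sync = File.exist?(CURSOR_FILE) ? File.read(CURSOR_FILE).strip : "1970-01-01T00:00:00Z"

    # Ask only for records changed since the last run instead of Model.all
    Model.where(updated_since: last_sync).to_a.each do |record|
      # persist or process the changed record locally here
    end

    File.write(CURSOR_FILE, Time.now.utc.iso8601)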

Will caching or a background job improve my Rails app's performance?

My Rails app syncs calendar events from Gmail through the Nylas API. I am storing all the events and associated calendars in my app (either creating new records or updating existing ones). It takes a very long time; in fact, I get timeout errors on my Heroku-hosted Rails app whenever I try to sync a calendar, and I'm not sure why it takes so long. So to react, I want to either start caching the data (using Redis or Memcached - I still don't know exactly how I would do that) OR run the sync in a background job (using Delayed_Job or Resque).
I wanted to know how others would tackle this problem. I would appreciate some feedback, not only on what approach to take, but also pointers on how to do it.
If you need fast, persistent access within your app to a large set of calendar events that are ultimately sourced from an external system, then I'd create (in fact I have already created) models for the calendars and the events. The structure ought to be fairly obvious from the structure of the API, and I would just persist them in your database so you can use ActiveRecord methods to retrieve and sort them.
It's unlikely that you'd need a caching layer on top of the model.
Synchronisation is definitely a background job.
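A minimal sketch of that background job, assuming ActiveJob (which can be backed by Delayed_Job, Resque or Sidekiq); Account and CalendarSyncService are hypothetical names standing in for your own sync code.

    # app/jobs/calendar_sync_job.rb
    class CalendarSyncJob < ActiveJob::Base
      queue_as :default

      def perform(account_id)
        account = Account.find(account_id)
        # Fetch calendars/events from the Nylas API and upsert them into your own
        # tables, so web requests only ever read local data.
        CalendarSyncService.new(account).sync!
      end
    end

    # The controller only enqueues and returns immediately, which avoids
    # Heroku's 30-second request timeout:
    #   CalendarSyncJob.perform_later(current_account.id)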
You can use both, but mostly background jobs: delayed_job or Sidekiq.
You can also run a cron task that periodically updates the calendar data in your app.
To fetch data, you can read from the memory store first and then fall back to the database; this is where Memcached will be useful.

Structuring backend queries

So this is more of a methodology question than a coding question. I want to ask this before I actually start coding in order to choose the best route. I have a messaging app. When the app launches, I query in the background all the messages from the backend where current_user_id is equal to recipient_id. Now I have all of the messages the user needs to see, so I store them locally in an SQLite database.
Great, but what about when the user gets new messages? How can I structure a query to receive those without having to query the entire table again? Also, how do I set this up as a continual process? Is the phone always requesting updated information from the backend while it's in the foreground?
Thanks, I really appreciate your help. I'm currently using iOS and, as stated, SQLite. Also, my backend is AWS Node.js.
It looks like your goal is to ultimately synchronize data between two sources over a network with a constraint that the client is updated in a reasonable amount of time. You have a design choice to make between a push vs pull architecture.
Push architectures have the servers push data to clients when an event occurs.
Pull architectures have the device periodically poll the server for changes. This can be achieved through timed events.
There are hybrid approaches too.
Each has its advantages and disadvantages, as some require constant polling. Others require constant connection-based protocols, which present more scaling challenges.
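As a language-agnostic illustration of the pull approach (sketched in Ruby only for brevity), the client keeps a cursor and periodically asks the backend for anything newer; the endpoint shape and field names are assumptions.

    require "net/http"
    require "json"
    require "uri"

    last_seen_id = 0   # persisted locally (e.g. in the SQLite store) between runs

    loop do
      uri = URI("https://api.example.com/messages?recipient_id=42&after_id=#{last_seen_id}")
      new_messages = JSON.parse(Net::HTTP.get(uri))   # assumed to return a JSON array

      new_messages.each do |msg|
        last_seen_id = [last_seen_id, msg["id"]].max
        # insert msg into the local message store here
      end

      sleep 30   # polling interval; a push design (e.g. WebSockets) removes this loop
    end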

Notify multiple Rails apps when a PostgreSQL database changes

I have two Rails apps (one for the web, and one for the backend) accessing the same PostgreSQL database. I would like to notify the other app if one app changes a table in the database.
How should I go about it?
I think this depends on:
How reliable you need it to be.
How fast you need the notifications to be delivered.
The Faye solution suggested by @techvineet provides a good fast but unreliable option. (N.B. I don't mean it'll fail often, but it likely will occasionally, maybe 1 in 1000; if that causes you a problem, then avoid it.)
If you need something 100% reliable and speed isn't important, you could write audit events to the database and then poll that table from each app; if these are committed in the same transaction as the actual work, you should be safe... but it'll be as slow as your polling cycle.
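A sketch of that audit-table approach, with every name hypothetical: the writing app records the change in the same transaction as the work (an after_save callback runs inside the save's transaction), and the other app polls the audit table from a scheduled task.

    # Writer side (either app): record the change alongside the actual work
    class Widget < ActiveRecord::Base
      after_save do
        AuditEvent.create!(record_type: "Widget", record_id: id, action: "saved")
      end
    end

    # Reader side (the other app): poll from cron/clockwork/whenever
    def handle_change(event)
      # react to the change, e.g. expire caches or reload in-memory state
    end

    cursor = SyncCursor.first || SyncCursor.create!(last_audit_event_id: 0)
    AuditEvent.where("id > ?", cursor.last_audit_event_id).order(:id).find_each do |event|
      handle_change(event)
      cursor.update!(last_audit_event_id: event.id)
    end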
Lastly, if you want something fast AND reliable, then you could look at using something like ActiveMQ or RabbitMQ to give you reliable messaging between the applications to notify changes. You'll need a worker process in each app to listen for changes and deal with them appropriately.
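A sketch of the RabbitMQ route using the bunny gem; the queue name and message shape are arbitrary, and in practice the publish would sit in something like an after_commit callback while the subscribe loop runs in a dedicated worker process.

    require "bunny"
    require "json"

    conn = Bunny.new(ENV["RABBITMQ_URL"])   # falls back to localhost defaults if unset
    conn.start
    channel = conn.create_channel
    queue   = channel.queue("table_changes", durable: true)

    # Publisher (app A): announce a change
    channel.default_exchange.publish(
      { table: "widgets", id: 123, action: "update" }.to_json,
      routing_key: queue.name, persistent: true
    )

    # Worker (app B): listen for changes and react
    queue.subscribe(block: true) do |_delivery_info, _properties, payload|
      change = JSON.parse(payload)
      # refresh whatever depends on change["table"] / change["id"]
    end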
My last comment would be that this 'smells' a little. The fact that you're trying to do this makes me think the architecture of your app might need looking at in the longer term. An obvious way of doing it might be to encapsulate all the business logic into an app which exposes an API, and then calling that API from both front and back end applications.
You can try using Faye (http://faye.jcoglan.com/), which is a publish/subscribe messaging server. It can be integrated with Rails via https://github.com/jamesotron/faye-rails.git. Messages can be transferred from one app to another by publishing them on a channel and subscribing to that channel.
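For the publishing side, one simple option is to POST the message straight to the Faye endpoint over HTTP, so the Rails app that made the change does not need a long-running client; the endpoint URL and channel name below are assumptions.

    require "net/http"
    require "json"
    require "uri"

    faye_endpoint = URI("http://localhost:9292/faye")   # wherever the Faye server is mounted
    message = { channel: "/tables/widgets", data: { id: 123, action: "update" } }

    Net::HTTP.post(faye_endpoint, message.to_json, "Content-Type" => "application/json")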
Hope this will help.
