Best BaaS for iOS poll app

I am making an iOS poll app; each device may cast one vote per month.
When a vote is made in the iOS app, the following is sent to the BaaS: (UDID, vote, local_timestamp).
The BaaS then needs to check whether this UDID has already voted this month:
if it has voted, return "NO" to the iOS device;
if it has not voted this month:
return local_timestamp to the iOS device
save (UDID, vote, local_timestamp) to the DB
There is also a view controller for showing the current votes from all devices.
It will be polled every N seconds/minutes, so that new votes are updated.
This should return the number of votes for each option; for example, with two options it might return {1: 500, 2: 340}, meaning option one has 500 votes and option two has 340 votes.
I also have a question about how to get the current votes from all devices.
Is it better to compute the number of votes for each option every time they are requested?
Or should I increment a counter when a new vote arrives? Here I see a synchronization problem.
Or just update the counter every N seconds/minutes?
I am looking for a BaaS that provides these features.
I have no preference for any BaaS provider.
But the BaaS needs to provide the ability to run custom business logic for this poll app to work.
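A minimal sketch of the server-side logic above, in plain Ruby (the in-memory store and the method names are assumptions for illustration; a BaaS would implement the same check in its custom server-side code against its database):

```ruby
require "time"

# VOTES maps a UDID to the month ("YYYY-MM") in which it last voted;
# TALLY maps an option number to its current vote count.
VOTES = {}
TALLY = Hash.new(0)

def cast_vote(udid, vote, local_timestamp)
  month = Time.parse(local_timestamp).strftime("%Y-%m")
  return "NO" if VOTES[udid] == month   # this UDID already voted this month

  VOTES[udid] = month                   # record that this UDID voted this month
  TALLY[vote] += 1                      # incremented counter, so reads stay cheap
  local_timestamp                       # echoed back to the iOS device
end

# Current votes across all devices, e.g. { 1 => 500, 2 => 340 }
def current_votes
  TALLY.dup
end
```

Maintaining a counter on each write, as sketched here, makes the polled reads cheap; the synchronization concern mentioned above is usually addressed by the atomic increment operations that Parse-style BaaS platforms provide for exactly this purpose.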

Parse.com is one of the best options for your case.
You can also run Cloud Code, and schedule Jobs to run every N minutes.
Their documentation is straightforward and simple. And it's free! https://parse.com/docs/ios_guide#top/iOS

Unfortunately, Parse has shut down. Why don't you try Hasura instead? It should fit your needs. Check out the comparison page to see how Hasura compares with other providers, and also explore everything that is possible with Hasura (you will end up with a blog web app as well as a todo web app deployed live in under 15 minutes).
Hasura has a lot of advantages over other providers: it lets you own your own data and infrastructure, and you can write your custom code in the language of your choosing.

Related

How to implement a simple feature flag system?

In our company, we want to develop a simple feature flag system to work beside our API gateway. Since we expect no more than 100 req/s on the system, we decided that third-party solutions would probably be costly (they require time and effort to understand), and it would also be a fun effort to build it on our own. Here are the requirements:
We have the list of users and the latest app version each one is using. We want to apply three kinds of policies to their usage of some application feature:
Everybody can use the feature
Only users with a version higher than a given version can use the feature
Some random percentage of users can use the feature (this is for A/B testing in our future plans)
These policies are dynamic and can change (maybe we want to remove the policy or edit it in the future)
So, the first problem is how to store the policies.
I think we can describe each policy with at most three fields: feature, type, and specification. For instance, A 2 1.5 means that feature A is active only for users on version 1.5 or above, A 3 10 means feature A is active for 10 percent of users at random, and finally A 1 means feature A should be active for all users.
Although this describes a policy in a meaningful manner, I can't see how to use these in a system. For instance, how can I store these policies in an RDBMS or in Redis?
The second problem is how to apply these policies in the real world. In our architecture, we want each request to first visit this service, and then (if authorized) proceed to the core services. Ideally, we call an endpoint of this service and it returns a simple yes/no (boolean) response. For simplicity, let's assume we get a user id and a feature, like 146 A (user 146 wants to use feature A), and we must determine whether 146 is allowed to use it or not.
I thought of two approaches. The first is real-time processing: as a request comes in, we make a database call to fetch enough data to determine the result. The second is to precompute these conditions beforehand: for instance, when a new policy is added, we apply it to all user records and store the results somewhere; then, when a request comes in, we simply fetch the list of all features the user has access to and check whether the requested feature is in the list.
The first approach requires a DB call and computation per request, and the second one is probably much more complicated to implement and needs to touch every user record when a new policy is added.
These are our two main problems. I tried to simplify things so that this becomes a more generic problem. I would appreciate it if you could share your thoughts on each of them.
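The three policy types from the question can be sketched in plain Ruby (the Policy struct, the method names, and the hash-based percentage bucketing are assumptions for illustration, not a prescribed design; each Policy instance corresponds directly to a (feature, type, specification) row in an RDBMS, or to a Redis hash per feature):

```ruby
require "digest"

Policy = Struct.new(:feature, :type, :specification)

# Deterministically map a user id into 0..99, so percentage rollouts are
# stable: the same user always gets the same answer for a given feature.
def percent_bucket(user_id, feature)
  Digest::MD5.hexdigest("#{feature}:#{user_id}").to_i(16) % 100
end

def feature_enabled?(policy, user_id, user_version)
  case policy.type
  when 1 # everybody can use the feature
    true
  when 2 # only users on at least the given version
    Gem::Version.new(user_version) >= Gem::Version.new(policy.specification)
  when 3 # a random (but per-user stable) percentage of users
    percent_bucket(user_id, policy.feature) < policy.specification.to_i
  else
    false
  end
end
```

For example, `feature_enabled?(Policy.new("A", 2, "1.5"), 146, "1.6")` returns true. The deterministic bucketing also sidesteps the precomputation problem for type-3 policies: no per-user state needs to be stored when the policy is added, since the answer can be recomputed cheaply on every request.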

Shopify API Customers orders_count Rails

I'm in the process of making an app that will update shipping depending on a customer's orders_count, using ShopifyAPI::Customer.
One main thing I'm trying to accomplish is being able to reset orders_count to zero for all customers.
I have been trying:
order = ShopifyAPI::Customer.all
order.update_all(:orders_count, "0")
It works when I'm addressing a single customer, but not with Customer.all. Is there a way I can work around this to update all customers in the DB at once?
Can you actually change a valid orders_count? It would be pretty crazy in my mind if Shopify allowed that.
If I were you (and I am not, for sure), I would simply change my algorithm to ask something more like "How many orders has this customer placed since last Tuesday?". That way, I reward them if their purchases exceed that threshold of X.
In your case, you are trying to erase their legitimate order history to make your counter work, which I doubt works.
In the meantime, if you can indeed set that count to zero, just loop through all the customers and, for each one, save their count as zero. There is no one-shot all-customers call in the Shopify API, just as there is no one-shot anything. Every resource is a one-off.
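The loop suggested above can be sketched like this (the helper name is an assumption, and whether Shopify actually accepts writes to orders_count is doubtful, as noted; in a real app you would page through customers and respect the API's rate limits):

```ruby
# Reset a counter attribute to zero, one record at a time. Works with any
# collection of objects that respond to #orders_count= and #save, which is
# what ShopifyAPI resource objects do.
def reset_orders_count(customers)
  customers.each do |customer|
    customer.orders_count = 0
    customer.save   # one API call per customer; watch Shopify's rate limits
  end
end

# In a real app you would fetch the customers first, e.g.:
#   customers = ShopifyAPI::Customer.find(:all, params: { limit: 250 })
#   reset_orders_count(customers)
```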

Set time to send out HTTP request

Is there a way in Rails to send out a request at a certain time?
I'm using an external credit card charging API, and I want to adjust each monthly subscription based on how many referrals they have (10% each, 10 referrals max). The API has a beta referral system built in, but it doesn't seem to work the way I need it to. Plus, there are just too many unknowns that I'd rather not get into at the moment. I just want to get it up and working, and since my system is fairly simple, I'd rather just do it manually.
There's a billing date for each subscription, and what I want to do is just manually adjust the price of the subscription based on how many active users there are containing the referral code of the user being charged. I'd like to just send out this request to the API just before they're billed. Like sometime around subscription.next_billing_at - 1.minute.
Then just set the subscription.price to price - (price * (User.where(referral_code: current_user_code)).count / 10).
I'm aware this is far from an optimal approach, considering the amount of extra requests being made each month, but since we're small right now, it shouldn't be a problem. Again, it's just a temporary solution so we can get things running now.
There are two options which directly answer your question.
Write a rake task and run it daily with cron via the Whenever gem. If you take this approach, the task should load all subscriptions due to be billed in the next cycle and update them as required.
Alternatively, use something like resque-scheduler, which would allow you to run a task at next_billing_at - 1.minute or thereabouts.
But if you are small, why not just update the price every time a new referral is created, using a callback? Unless there are specific rate or query limits on this API, I doubt a card processor is going to be affected by the traffic you generate. Of course, if there are other requirements (say, a referral only applies after a month), you are going to be stuck with one of the first two options, and the cron + rake task is probably the best solution in that case.
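The discount rule from the question (10% off per referral, capped at 10 referrals) can be sketched as follows; the method and constant names are assumptions for illustration:

```ruby
MAX_REFERRALS = 10
DISCOUNT_PER_REFERRAL = 0.10

def discounted_price(base_price, referral_count)
  discount = [referral_count, MAX_REFERRALS].min * DISCOUNT_PER_REFERRAL
  (base_price * (1 - discount)).round(2)
end

# In the Rails app this would be fed by something like:
#   referrals = User.where(referral_code: current_user_code).count
#   subscription.price = discounted_price(subscription.price, referrals)
```

Note that the original expression `price - (price * count / 10)` silently truncates when price and count are both integers (Ruby integer division); using a float rate as above avoids that, and the `min` enforces the 10-referral cap.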

Multiple database records in a table vs. multiple database queries: what is best for performance?

App running on: Ruby 2.0, Rails 4, and PostgreSQL.
1. The multiple tables story - how it works now:
A project has many users, as members.
A project also has many posts; when a post is created, a notification is created for each project user.
Let's say Project A has 100 users: we'll have 100 notifications in the database, which loads the database with a lot of duplicates.
But a user can delete their own notification and view it, and we can update their notification with user-specific data. We'll also use a rake task to remove notifications older than a specific time interval.
2. The multiple db queries - what we want to do:
There is an idea of creating only one notification per activity and using a many-to-many relation through a table called notifications_users, where we'll keep per-user information about a notification: whether it was read by a given user, whether that user removed it from their notifications tab, etc.
The downside, I think, is multiple DB queries: whenever we need to find something about a notification and a user, we'll have to look it up in the notifications_users table.
Also, building the relation this way makes it harder to clean old notifications out of the database, as we will not know whether a notification was read by some user.
Thank you.
Option (1.) seems pretty reasonable, and if you can keep the notification model thin (like user_id, activity_id, timestamps, and maybe a couple more flags), then I wouldn't expect unreasonable performance penalties, since it is a pretty usual relational model and I'd expect the database to handle it easily.
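For reference, that thin model could look like this as a Rails 4 migration; the column and flag names are assumptions based on the fields listed above, not a prescribed schema:

```ruby
class CreateNotifications < ActiveRecord::Migration
  def change
    create_table :notifications do |t|
      t.references :user,     index: true   # the recipient
      t.references :activity, index: true   # the post/event that triggered it
      t.boolean    :read,    default: false
      t.boolean    :removed, default: false # hidden from the notifications tab
      t.timestamps                          # created_at drives the cleanup rake task
    end
  end
end
```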
If you keep your notification expiration tight, the notifications table should not grow; if it does, for example when a user just abandons the account, I would look for specific solutions to those issues as they appear.
At Assembla.com we do notifications by email, and this is working pretty well for us, I'd say. So, at some point, you may want to consider this option too.
It seems to me that option (2.) just doesn't fulfil the business needs, and if you're not reconsidering those needs, then it's probably not worth considering this option either.
Before analyzing the scenarios given in your question, I would like to ask you to clarify one point that is a bit confusing. The statements given below seem to conflict:
1) Also project has many posts, when a post is created a notification is created for each project user.
2) Let's say if Project A has 100 users, we'll have 100 notifications in database, this will load the database with a lot of duplicates.
Statement no. 1 says that when a post is created, a notification is created for each user. So if there are 100 users and 100 posts, the notifications table will have 10000 rows (100*100).
Statement no. 2 says that if there are 100 users, there would be 100 notifications in the database, meaning the notifications table will have 100 rows.
Which of these two should I consider?

Data sync between database and google calendar

I would like to sync my DB (tasks in my DB that have a description, a date, a start time, an end time, and a user) with Google Calendar.
For the sync with Google I plan to use these components (of course I could write the whole thing on my own, but that is something I can plan for the future; right now I am short of time. Alternatively, can you suggest some working code that connects to Google Calendar to send/receive data?).
Now, my main problem is not really linked to Delphi programming; anyway, I must ask it as a Delphi-related question because other questions get unviewed (like this one I asked).
So I wonder how to do the sync. Note: I do a one-way sync, and the generated calendar will be a read-only calendar.
I can set a maximum range in the past and future to be synced (like 10 days in the past and 100 in the future, for example). Then the idea I have is this:
as I start the sync app, I completely read the Google Calendar items in the range, compare them one by one with what I have in the DB, and then "merge" the changes. Then, on a timer, I check for differences in my DB and upload the changes.
But I am not sure this is the best solution.
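The compare-and-merge step described above is language-neutral; here is a minimal diff sketch in Ruby (the question is about Delphi, but the logic ports directly), assuming both sides are modelled as hashes keyed by a stable task id:

```ruby
# Given the local DB tasks and the remote Google Calendar items in the sync
# range, compute which operations to push to Google for a one-way sync.
def one_way_diff(local, remote)
  {
    insert: local.keys - remote.keys,   # in the DB but not yet on Google
    delete: remote.keys - local.keys,   # on Google but no longer in the DB
    update: (local.keys & remote.keys).select { |id| local[id] != remote[id] }
  }
end

local  = { 1 => "Call client", 2 => "Ship build" }
remote = { 2 => "Ship build (old title)", 3 => "Stale task" }
one_way_diff(local, remote)
# => { insert: [1], delete: [3], update: [2] }
```

For the read-only one-way case this is all the merge needs; the two-way variant mentioned later (accepting tasks newly added on Google) would additionally require tracking changes on the remote side between syncs.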
A simplification of the real case is this: imagine it is a CRM with tasks assigned to every user. Since there is logic behind every task, I want to manage that logic only in my application; the idea of publishing the calendar to Google is that it is then easily available from any mobile device. This is why there is a one-way sync. I could also let the calendar not be read-only; in any case, at every sync I would "download" the newly inserted tasks but ignore the deleted and edited ones. In that second case it is not enough to track changes in my DB; I should also track changes on Google, at least to "intercept" the newly added tasks.
I am aware this is a generic question, but I would like to trigger an answer that can be useful, either redirecting me to a sync algorithm, to Delphi sample code, or to anything that can help me make progress on this issue. Thanks.
Google: "calendar sync algorithms"
https://wiki.mozilla.org/Calendar:Syncing_Algorithm
http://today.java.net/pub/a/today/2007/01/16/synchronizing-web-client-database.html
Synchronisation algorithms
The last one is actually funny because it leads right back to Stack Overflow ;) Point is: I think there is no need to reinvent the wheel. PS: The first link contains some useful thoughts similar to yours.
