I am working on a SaaS system, and while writing it I started wondering why I am duplicating the Stripe objects in my app. Why not just depend on Stripe to maintain all of those records? For example, I have the models:
Customers
Subscriptions
Invoices
Charges
Payment Methods
Some of these (Customers, Subscriptions) are crucial to my app and must exist, as they contain extra information about the object. Others are just duplicates of the Stripe objects.
So my question is, in your architectures, why not just rely on API calls to Stripe to supply the app with data instead of depending on API calls and webhooks to maintain a consistent state between my app and Stripe?
I can think of a few ways to write this.
Create a model and store all fields in the DB. This is time-consuming and very rigid (lots of webhooks to monitor and syncing to be done). On the flip side, almost all of the information I need is local and doesn't need an API call to work.
Create a model for all objects but only store a PK, a reference field to the corresponding Stripe object, and any "extra" data that is specific to my application. All data relating to Stripe would require an API call to retrieve (a rough sketch of this option follows the example below).
Only create models for objects that perform logic in my app (Customer, Subscription) and load instances of children via the API, e.g.:
class Subscription
  def invoices
    # Loaded on demand from Stripe instead of being stored locally
    Stripe::Invoice.list(subscription: stripe_id)
  end
end
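For completeness, a rough sketch of option 2 could look like the model below (the Customer model, its internal_notes column, and the email delegation are just illustrative assumptions): only app-specific fields live locally, and everything else is fetched from Stripe when needed.

class Customer < ApplicationRecord
  # Columns: id, stripe_id, internal_notes -- only app-specific data is stored locally.

  def stripe_customer
    # Fetch and memoize the full Stripe record only when it is actually needed.
    @stripe_customer ||= Stripe::Customer.retrieve(stripe_id)
  end

  def email
    stripe_customer.email
  end
end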
The big advantage to keeping some or all of this state on your end is that you don't need to make Stripe API requests to get the information, which makes fetching the information a lot faster.
Also, as your system and usage scales up, it won't be practical to make an API request to Stripe every time you need some of this info, as you'll start hitting rate limits.
Generally speaking it's best to strike a balance among the amount of data you store, the complexity of the logic you write to keep things in sync, and the number of API requests you make. The specific balance often depends on the unique needs of your particular system/tech stack/business/etc.
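If you do keep local copies of some objects, a minimal sketch of the webhook side of the sync could look like the controller below (the Subscription model, its columns, and the route are assumptions; Stripe::Webhook.construct_event is the stripe gem's signature-verification helper):

class StripeWebhooksController < ApplicationController
  skip_before_action :verify_authenticity_token

  def create
    # Verify the signature and parse the event payload.
    event = Stripe::Webhook.construct_event(
      request.body.read,
      request.env["HTTP_STRIPE_SIGNATURE"],
      ENV["STRIPE_WEBHOOK_SECRET"]
    )

    case event.type
    when "customer.subscription.created", "customer.subscription.updated"
      stripe_sub = event.data.object
      # Copy only the fields the app actually needs onto the local record.
      Subscription.find_or_initialize_by(stripe_id: stripe_sub.id)
                  .update!(status: stripe_sub.status)
    end

    head :ok
  end
end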
Let's say I have 2 models in my app: User and Survey
I'm trying to plot the number of paid surveys over time. A paid survey is one that has been created by a user that has an active subscription. For simplicity, let's assume the User model has subscription_start_date and subscription_end_date.
So a survey becomes "paid" the moment it is created (provided the user has an active subscription) and loses its "paid" status when the subscription_end_date has passed. Essentially, the "paid survey" is really a state with a defined start and end date.
I can generate the data fine. What I'm curious about is what's the most recommended way of storing this kind of stats? What should that table look like basically.
Another thing I'm concerned about is whether there are any disadvantages of having a daily task that adds the data point for the past day.
For more context, this app is written in Rails and we're thinking of using this stat architecture for other models too.
If I am understanding you correctly, I do not think you need an additional model or daily task to generate data points. To generate your report you just need to come up with the right SQL/ActiveRecord query. When you aggregate the information, be careful not to introduce N+1 queries. For simplicity's sake we could pull all the information you need using:
surveys = Survey.all.includes(:user)
Based on your description, a Survey instance has a start date that is just created_at.to_date. And since Survey belongs_to :user, its end date is user.subscription_end_date.
When plotting the information you may need to transform surveys into some data structure that groups the information by date. Alternatively you could probably achieve that with a more complex SQL statement.
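As a rough sketch, that aggregation in Ruby (assuming the subscription date columns from your description and an arbitrary 30-day reporting window) might look like:

surveys = Survey.includes(:user).to_a

# For each date in the window, count the surveys that were "paid" on that day:
# created on or before the date, and before the subscription ended.
report = (30.days.ago.to_date..Date.current).each_with_object({}) do |date, counts|
  counts[date] = surveys.count do |survey|
    started = survey.created_at.to_date
    ended   = survey.user.subscription_end_date
    started <= date && (ended.nil? || date <= ended)
  end
end
# => a hash of { date => paid survey count } ready to hand to a charting library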
You could of course introduce a new table that stores the data points by date to avoid a complex query or data aggregation via ruby. The downside of this is that you are storing redundant information and assume the burden of maintaining data integrity. That doesn't mean you shouldn't do it because there may be an upside in regards to performance and reporting convenience.
I would need more information about your project before saying exactly what I would do, but it sounds like you already have the information you need in your database and it's just a matter of querying it properly.
I am creating a very basic online store that allows users to buy and customize certain products. I have gotten the payment system working using Stripe, and all that is left to do is provide the seller with a place to view the completed orders (which should contain Shipping Address, order configuration, etc).
I expect that this app will receive very, very low traffic (it's more for fun than anything), so I do not need a super robust admin system. I thought it would actually be sufficient to pass order information to Stripe as metadata, and have the seller view the order information on Stripe. However, a potential problem I see is that there might be more data than the metadata limit allows (20 key/value pairs, 500-character value limit). Would it be better to create an admin system on my side (using webhooks to notify the application when the payment has been processed)? Thanks!
Stripe is really only meant to handle the payments part of the equation. The order part is normally handled on top of Stripe (either in your own system or some third party), with that system linking order ids to charge ids.
Having your own order admin page would normally make more sense in the Stripe model, since Stripe only stores the amount charged and not much more.
Also, unless you're doing subscriptions, there's no need to wait for a webhook. The Create Charge API is synchronous, so you'll know instantly whether the payment was processed.
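As a rough sketch (the Order model and its columns are assumptions), that synchronous flow could look like:

charge = Stripe::Charge.create(
  amount:      order.total_cents,
  currency:    "usd",
  source:      params[:stripe_token],
  description: "Order ##{order.id}"
)

# Stripe::Charge.create only returns once the payment has been processed,
# so the order can be marked paid and linked to the charge right away.
order.update!(stripe_charge_id: charge.id, status: "paid")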
I have a Rails application that has a subscription aspect with three plan levels depending on price tier. For example, 0-1000 messages is $10, 1001-10000 is $20, and there is a $0.01 surcharge on both for going over the quota amount.
Each User has many Messages. What's the best way (high level) to keep track of each user's message usage and overages and charge them accordingly?
I think you'll need these elements:
A way to track the number of messages (caching)
A way to handle payments (how to calculate surcharges, etc.)
A scheduled process for charging
Messages
To track the sent messages, you need a caching system (calculating on the fly will be expensive). I don't have that much experience here, but I'd recommend looking at Redis (and researching caching options more generally).
I would use Redis to store a key/value pair for all the month's messages. So when a message is created in your DB, have a mechanism to add the update to a Redis hash (which will belong to a user ID)
Instagram info on Redis
Redis Hashes (store message date per username)
The Redis key/values would store the message timestamp (created_at) and the user_id of the message. This means you'll be able to reference the month's Redis store and dump it to another DB (to reference later on), allowing you to calculate how many messages each user sent.
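As a rough sketch, one simple variant (keeping a per-month running counter per user rather than storing every timestamp) could look like this, assuming the redis gem and an after_create hook on Message:

require "redis"

REDIS = Redis.new

class Message < ApplicationRecord
  belongs_to :user
  after_create :count_in_redis

  private

  def count_in_redis
    # One hash per calendar month; field = user id, value = running message count.
    month_key = "messages:#{created_at.strftime('%Y-%m')}"
    REDIS.hincrby(month_key, user_id.to_s, 1)
  end
end

# At billing time, read back every user's count for the month:
# REDIS.hgetall("messages:2014-06") # => { "42" => "1375", ... }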
Payments
To enable a tier-based pricing structure, you'll need to be able to calculate the monthly invoices to send out. This should be an internal system: basically a mechanism to present a user with an invoice and send them to a payment provider to transfer the funds.
To calculate the invoice, you'll basically need to run a rake task to do this:
Cycle through the user's Redis store
Store the Redis store in a db (maybe)
Take a "count" of messages
Use a simple algorithm to determine the price (a sketch follows this list)
Create priced invoice & associated record in invoice_messages table (where you can itemise message usage)
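As a sketch of the pricing step (this is just one reading of the tiers in the question; adjust the quotas and base prices to your real plan rules):

PLANS = {
  basic: { quota: 1_000,  base_cents: 1_000 },  # up to 1,000 messages for $10
  pro:   { quota: 10_000, base_cents: 2_000 }   # up to 10,000 messages for $20
}.freeze

def monthly_charge_cents(plan, message_count)
  quota, base = PLANS.fetch(plan).values_at(:quota, :base_cents)
  overage = [message_count - quota, 0].max   # messages beyond the plan's quota
  base + overage                             # $0.01 (1 cent) per overage message
end

monthly_charge_cents(:basic, 1_375) # => 1375, i.e. $13.75 ($10 base + 375 cents overage)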
Scheduling
Although a relatively small feature, you'll need to schedule your invoice creation.
I'm actually thinking about this currently (not much experience), so to do this, you'll need to set up a rake task that cycles through users and works out when each should be invoiced. Depending on your app, you'll have to determine the right invoice date and then run the previous steps for it.
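For example, with the whenever gem (just one option; plain cron or a job scheduler would work equally well) the schedule could be as simple as:

# config/schedule.rb -- run the invoicing task at 02:00 on the 1st of each month
every "0 2 1 * *" do
  rake "billing:generate_invoices"   # hypothetical rake task covering the steps above
end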
So here is the basic structure I'm proposing:
Data warehouse (for want of a better word)
E-commerce site
Back-end MIS
etc
So the idea is that I have an Order, for example. An order can be created via the e-commerce site or via the back-end MIS. In either case the order should flow through to the e-commerce site to show the order to the user, and vice versa.
There will be other apps in the future.
So the thinking is, to have a central warehouse that wraps this data in a service API, and then the other apps push / pull to it.
Sound OK? I guess the question is syncing the data. When I create an order, do I push the order at create time to the warehouse, or put it on some queue, or is there some other method to keep all of these in sync, assuming near-realtime to realtime sync is required?
Assume your REST server is just another data store. How would each client get updates from a plain old database when needed?
If you had each client poll the data store at regular intervals, that would be one solution.
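As a rough sketch of that polling approach (the warehouse URL, the updated_since parameter, and upsert_local_order are all assumptions about your service API):

require "net/http"
require "json"
require "time"

last_sync = Time.now - 60

loop do
  # Ask the warehouse for everything that changed since the last poll.
  uri = URI("https://warehouse.example.com/orders?updated_since=#{last_sync.utc.iso8601}")
  JSON.parse(Net::HTTP.get(uri)).each do |order|
    upsert_local_order(order)  # create or update the matching local record
  end
  last_sync = Time.now
  sleep 60                     # poll once a minute; tune to how close to realtime you need
end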
I am building a Ruby on Rails application where I need to be able to consume a REST API to fetch some data in (Atom) feed format. The REST API has a limit on the number of calls made per second as well as per day. And considering the amount of traffic my application may have, I would easily be exceeding the limit.
The solution to that would be to cache the REST API response feed locally and expose a local service (Sinatra) that provides the cached feed as it is received from the REST API. And of course a sweeper would periodically refresh the cached feed.
There are 2 problems here.
1) One of the REST APIs is a search API where search results are returned as an Atom feed. The API takes in several parameters, including the search query. What should be my caching strategy so that a cached feed can be uniquely identified against the parameters? That is, for example, if I search for, say,
/search?q=Obama&page=3&per_page=25&api_version=4
and I get a feed response for these parameters. How do I cache the feed so that when the exact same parameters are passed in a later call, the cached feed is returned, and when the parameters change, a new call is made to the REST API?
2) The other problem is regarding the sweeper. I don't want to sweep a cached feed which is rarely used. That is, the search query Best burgers in Somalia would obviously be far less popular than, say, Barack Obama. I do have data on how many consumers have subscribed to each feed. The strategy here should be that, given the number of subscribers to a search query, the cached feeds are swept based on how large that number is. Since the caching needs to happen in the Sinatra application, how would one go about implementing this kind of sweeping strategy? Some code would help.
I am open to any ideas here. I want these mechanisms to perform very well. Ideally I would want to do this without a database, with pure page caching. However, I am open to the possibility of trying other things.
Why would you want to replicate the REST service as a Sinatra app? You could easily just make a model inside your existing Rails app to cache the Atom feeds (storing the whole feed as a string, for example).
For example, a CachedFeed model which is refreshed whenever its updated_at is old enough.
You could even use static caching for your CachedFeed controller to reduce the strain on your system.
Having the cache inside your Rails app would greatly reduce complexity in terms of when to renew your cache or even count the requests performed against the rest api you query.
You could have model logic to distribute the calls you have toward the most popular feeds. The search parameters could just be attributes of your model so you can easily find and distinguish them.
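A rough sketch of that model (the column names, the TTL, and fetch_from_remote are assumptions) could be:

class CachedFeed < ActiveRecord::Base
  # Columns: query_key (string, unique index), body (text), timestamps.
  TTL = 15.minutes

  def self.fetch(params)
    key  = params.sort.to_h.to_query          # identical params => identical cache key
    feed = find_or_initialize_by(query_key: key)
    if feed.new_record? || feed.updated_at < TTL.ago
      feed.body = fetch_from_remote(params)   # call out to the rate-limited REST API
      feed.save!
    end
    feed.body
  end
end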