We are using graphql-ruby in one of our internal projects: a Rails API backend serving a React Native Web frontend. I'm curious what's considered best practice for handling the ordering of returned results.
One option I see is to provide both order_direction and field_to_order_by arguments, which the client must state explicitly for each query (with sensible defaults, of course).
One way to handle this would be:
if (sort_column = args[:sort_by])
  if (direction = args[:direction])
    users = users.order(sort_column.to_sym => direction.to_sym)
  else
    users = users.order(sort_column.to_sym) # fall back to the default (ascending) sort order
  end
end
Another option, of course, would be to return all results in a pre-defined direction (ASC or DESC) and have the client reorder them itself. This seems very inefficient, however. There's a real dearth of information on how to approach this, so I'd like to know what's considered best practice.
Any help appreciated!
As a best practice, ordering should be done as far down the stack as possible, ideally in the database.
But as I understand it, you're deciding between ordering results on the server side (the GraphQL API) and on the frontend (the React Native application), so:
I would recommend giving the client application an option to request the results in a specific order and handling the sorting in the server API. That way the client application only has to display the results, without spending time processing them.
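To make that concrete, here's a minimal sketch in graphql-ruby's class-based style; the type names, field, and whitelist are all hypothetical, and you'd adapt them to your schema:

class Types::OrderDirection < Types::BaseEnum
  value "ASC", value: :asc
  value "DESC", value: :desc
end

class Types::QueryType < Types::BaseObject
  field :users, [Types::UserType], null: false do
    argument :sort_by, String, required: false
    argument :direction, Types::OrderDirection, required: false
  end

  # Whitelist sortable columns so raw client input never reaches the SQL.
  SORTABLE_COLUMNS = %w[name email created_at].freeze

  def users(sort_by: "created_at", direction: :asc)
    column = SORTABLE_COLUMNS.include?(sort_by) ? sort_by : "created_at"
    User.order(column.to_sym => direction)
  end
end

A client query would then look like { users(sortBy: "name", direction: DESC) { id name } }, since graphql-ruby camelizes argument names by default, and the enum means an invalid direction is rejected before your resolver ever runs.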
Related
So, I'm relatively new to Vue, and I'm currently using it to build a small app that displays order data from Square's API.
I'm currently working on a stack that uses Rails to make API calls via the square.rb gem. The frontend is entirely Vue, using Pinia as a store, and there isn't going to be any kind of database behind this, because reasons.
All data is provided directly via Square's API. I am currently polling to update order info, but my client wants to make this app truly real-time: it deals with food deliveries through ride-share companies, and its purpose is to show order statuses in real time on an in-house screen at the restaurant.
Now, Square has a webhook subscription service, and based on my reading it sounds like I can consume these events to update my app, but there are a few logical leaps that I haven't been able to make yet around how to get that data to the frontend of my app.
My questions are the following, with the intent being to connect the dots between the different technologies I might need to employ here to make this work. Kinda get a sense of what I'd need and where to link it up.
Can I use Vue to consume webhook payloads directly and update through reactivity? That would be ideal, but I haven't found any docs yet that give me a good idea of whether that's possible.
If that is not possible, do I need to use some sort of socket connection (socket.io) to listen for these webhook updates?
If the current setup or proposed setup in the questions above is not feasible, what is a better solution for handling this while still using Vue?
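On question 1: a webhook is a POST from Square to a publicly reachable server, so the browser app can't consume it directly; the usual bridge is for the Rails backend to receive the webhook and push it on to the browser over a WebSocket. A minimal Rails-side sketch using ActionCable (the route, stream name, and payload handling are all hypothetical):

# config/routes.rb (inside the routes.draw block) -- the endpoint your
# Square webhook subscription points at
post "/webhooks/square", to: "webhooks#square"

# app/controllers/webhooks_controller.rb
class WebhooksController < ActionController::API
  def square
    payload = JSON.parse(request.body.read)
    # Rebroadcast the order event to every browser subscribed to the stream.
    ActionCable.server.broadcast("orders", payload)
    head :ok
  end
end

The Vue app would then subscribe to that stream (e.g. via the @rails/actioncable client) and write incoming payloads into the Pinia store; that subscription is essentially the socket connection question 2 asks about.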
I have an app that allows users to sort and filter through 30,000 items of data. Right now I make fetch requests from Redux actions to my Rails API, with the queries being handled by scope methods on my Rails end. My instructor is recommending that I move all my querying to my front end for efficiency, but I'm wondering whether it will really be more performant to manage a Redux state object holding 30,000 objects, each with 50 attributes of its own.
(A couple extra notes: Right now I've only run the app locally and I'm doing the pagination server-side so it runs lightning fast, but I'm a bit nervous about when I launch it somewhere like Heroku. Also, I know that if I move my querying to the front-end I'll have more options to save the query state in the URL with react-router, but I've already sort of hacked a way around that with my existing set-up.)
Let's have a look at the pros and cons of each approach:
Querying on Front End
👍 Querying does not need another network request
👎 Network requests are slower because there is more data to send
👎 App must store much more data in memory
👎 Querying is not necessarily more efficient because the client has to do the filtering and it usually does not have the mechanisms to do so effectively (caching and indexing).
Querying on Back End
👍 Less data to send to client
👍 Querying can be quite fast if database indexes are set up properly
👍 App is more lightweight, it only holds the data it needs to display
👎 Each query will require a network request
The pros of querying on the Back End heavily outweigh those of querying on the Front End, so I have to disagree with your instructor's opinion. Imagine if, when you searched Google, it sent every relevant result to your browser and did the pagination and sorting there; your browser would feel extremely sluggish. With proper caching and database indexes on your data, the extra network requests will not be a huge disadvantage.
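To make the back-end side concrete, here's a hedged Rails sketch (the model, column, and scope names are made up): a scope plus a database index keeps server-side filtering fast even across 30,000 rows.

# app/models/item.rb -- stand-in for your 30,000-row model
class Item < ApplicationRecord
  scope :in_category, ->(cat) { where(category: cat) }
  # Only pass whitelisted column names here, never raw user input.
  scope :sorted_by, ->(column, dir = :asc) { order(column => dir) }
end

# A migration so filtering becomes an index lookup, not a table scan.
class AddCategoryIndexToItems < ActiveRecord::Migration[7.0]
  def change
    add_index :items, :category
  end
end

Item.in_category(params[:category]).sorted_by(:name).page(params[:page]) (pagination via a gem such as Kaminari) then ships only one page of rows over the network.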
I am currently constructing a RESTful web service using node.js for one of my current iPhone applications. At the moment, the system works as follows:
client makes requests to node.js server, server does appropriate computations and MySQL lookups, and returns data
client's reactor handles the response and updates the UI
One thing that I've been thinking about is the differences (in terms of performance and best practice) of making multiple API calls to my server vs one call which executes multiple join statements in the MySQL database and then returns a constructed object.
For example:
Lets say I am loading a user profile to display in the UI. A user has a profile picture, basic info, and news feed items. Using option one I would do as follows:
Make a getUser request to the server, which would do a query in the DB like this:
SELECT * FROM user
JOIN user_info ON user.user_id = user_info.user_id
LEFT JOIN user_profile_picture ON user_profile_picture.user_id = user.user_id;
The server would then return a constructed user object containing the info from each table
Client waits for a response for the server and updates everything at once
Option 2 would be:
Make 3 asynchronous requests to the server:
getUser
getUserInfo
getUserProfile
Whenever any of the requests are received, the UI is updated
So given these 2 options, I am wondering which would offer better scalability.
At the moment, I am thinking of going with option 2 for these reasons:
Each of the async requests will be faster than the joined query in option 1, so something can be displayed to the user sooner
I am also integrating Memcached, and I feel that the 3 separate calls will make it easier to cache specific results (e.g. not caching a whole user profile, but caching user, user_info and user_profile_picture separately).
Any thoughts or experiences?
I think the key question here is whether or not these API calls will always be made together. If they are, it makes more sense to set up a single endpoint and perform a join. However, if that is not the case, then you should keep them separate.
Now, what you can do, of course, is use a query syntax that lets you specify whether or not a particular endpoint should give you more data, and combine it with a join. This does require more input sanitation, but it might be worth it, since you could then minimize requests and still have an adaptable system.
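For example (the parameter name here is hypothetical): GET /user/42 could return just the bare user row, while GET /user/42?expand=user_info,profile_picture tells the server to join in the extra tables and return the combined object. Validating the expand list against a whitelist of table names covers the input-sanitation concern.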
On the server side, it's unlikely that either of your two approaches will be noticeably slower than the other unless you're dealing with thousands of rows at a time.
Say you have a Recipe Manager application that you're building with a Web API project. Do you send the list of recipes along with their ingredient names in JSON? Or do you send the recipes, ingredient names, and ingredient details? What's the process for determining how big the initial payload should be for a SPA?
These are the determining factors in how much to send to the client in an initial page:
Data that will be displayed for that first page
Lookup list data for any drop downs on that page
Data that is required for presentation rules (it might not be displayed, but it is used)
On a recipe page showing a list of recipes, I would fetch the recipes plus a few key fields (like recipe name, the dish, and other key info) to display in the list. Enough for the user to decide what to pick. Then, when the user dives into a recipe, go get that one recipe's details.
The general rule is: get what your user will almost certainly need up front, then fetch other data as they request it.
The process by which you determine how much data to send depends solely on the experience you want to provide your users, but it's as simple as this: if my experience demands that I readily display all of the recipes with a brief description and then allow users to drill into a recipe for more information, then I'm only going to send enough information to produce that display and navigate further into the entity.
If, after navigating into the recipe, you then need to display the ingredient names and measures, send those down along with enough information to navigate further into any single ingredient.
And as you can see it just goes on and on.
It depends whether your application is just a simple HTTP API backing your web page, or whether your goal is something more akin to Platform as a Service. One driver for the adoption of SPAs is that they make the browser just another client, like an iOS or Android app, or a 3rd party.
If you want to support multiple clients, then you likely want to design your APIs around the resources you are trying to expose, so that you can use the uniform interface of GET/POST/PUT etc. against those resources. This makes it much more likely that you are not coding in a client-specific style and that your API will be usable by a wide range of clients.
A resource is anything you would want to have its own URN.
I would suggest that in this case you would likely want a Recipe Book resource which has links to individual Recipe resources, each of which probably contains all the information necessary for that Recipe. Ingredients would only become a separate resource if an Ingredient had enough depth of its own to warrant one.
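As a sketch of what those representations might look like (the field names are illustrative, not prescribed), the Recipe Book lists links, and each Recipe carries its own details:

GET /recipe-book
{
  "name": "My Recipe Book",
  "recipes": [
    { "name": "Pad Thai", "href": "/recipes/42" },
    { "name": "Carbonara", "href": "/recipes/43" }
  ]
}

GET /recipes/42
{
  "name": "Pad Thai",
  "ingredients": [
    { "name": "Rice noodles", "quantity": "200 g" },
    { "name": "Tamarind paste", "quantity": "2 tbsp" }
  ]
}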
At Huddle we use a Documentation Driven Design approach. That is we write the documentation for our API up front so that we can understand how usable our API would be. You can measure API quality in WTFs. http://code.google.com/p/huddle-apis/
Now, this logical division might not be optimal in terms of performance. You're dealing with a classic tradeoff here (ultimately, architecture is all about balancing design tradeoffs) between the usability of your API and its performance. As a rule, don't favour performance until you know it is an issue, because you will pay a penalty in usability or maintainability for early optimization.
Another possibility is to implement the OData query support for WebAPI. http://www.asp.net/web-api/overview/odata-support-in-aspnet-web-api
That way, your clients can perform their own queries to return only the data they need.
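For example, with standard OData query options a client can shape the response itself: GET /api/recipes?$select=Name,Dish&$orderby=Name&$top=10 returns just two fields of the first ten recipes, with no recipe-specific query code on the server.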
Hi, I am a student doing my academic project. I need some guidance in completing it.
My project is based on the Grails framework; it searches for books across 3 different bookstores and shows the price from all 3 stores. I need help with the searching part:
how do I direct the search to those bookstores once the user types in the required book?
Thanks in advance.
You need to give more details. By searching bookstores, do you mean searching in a database, or are these online stores like Amazon etc.?
I would find out if these online bookstores have APIs, or if you have a choice, select the online bookstores that do have APIs that you can use to do your searching. For example, Amazon has a "Product Advertising API" that can be used to search its catalogue (see http://docs.amazonwebservices.com/AWSECommerceService/latest/DG). You usually have to register as an affiliate to get access to these sorts of things.
Once you have several online bookstores that are accessible via APIs, it is relatively easy to write some Grails code to call them and coordinate the results. APIs usually take the form of web requests, either REST or SOAP (e.g. see Amazon - AnatomyOfaRESTRequest). Groovy's HTTPBuilder can be used to call and consume the bookstores' API web services if you can use simple REST, or I believe there are a couple of Grails plugins (e.g. REST Client builder). For SOAP, consider the CXF Client Grails plugin.
You could do the searches on the APIs one by one, or if you want to get more advanced, you could try calling all 3 APIs at the same time asynchronously using the new servlet 3.0 async feature (see how to use from Grails 2.0.x: Grails Web Features - scroll to "Servlet 3.0 Async Features"). You would probably need to coordinate this via the DB, and perhaps poll through AJAX on your result page to check when results come in.
So the sequence would be as follows:
User submits search request from a form on a page to the server
Server creates and saves a DB object to track requests, kicks off API calls asynchronously (i.e. so the request is not blocked), then returns a page back to the user.
The "pending results" page is shown to user and a periodic AJAX update is used to check the progress of results.
Meanwhile your API calls are executing. When they return, hopefully with results, they update the DB object (or better, a related object) to store the results and status of the call.
Eventually all your results will be in the DB, and the periodic AJAX check, which queries those results on the server, will be able to return them to the page. It could wait for all of the calls to the 3 bookstores to finish, or it could update the page as and when it gets results back.
Your AJAX call updates the page to show the results to the user.
Note if your bookstore doesn't have an API, you might have to consider "web scraping" the results straight from bookstore's website. This is a bit harder and can be quite brittle since web pages obviously change frequently. I have used Geb (http://www.gebish.org/) to automate the browsing along with some simple string matching to pick out things I needed. Also remember to check terms & conditions of the website involved since sometimes scraping is specifically not allowed.
Also note that the above is a server oriented method of accomplishing this kind of thing. You could do it purely on the client (browser), calling out to the webservices using AJAX and processing via JavaScript. But I'm a server man :)