Is it better to call Google Places API on server side or directly from my iOS Application - ios

I am working on an iPhone application and need to implement the Google Places auto-suggest functionality. However, I cannot use the textbox control provided by Google, as I need to do some processing on the data before displaying the list to the user. The auto-suggest is time-critical functionality, so I need to know whether I should call the Google API from my server and have my application call my server to do this (since the user's connection might be slow), or whether there is a good reason to still call the Google API from the phone app itself.
Thanks

The advantage of calling the API from the client side is that the processing and bandwidth are shared among the client devices, which saves you from high server-side costs after deployment.
If client-side response time is the motive, I would again suggest calling from the client rather than from your server, because there is only one request instead of two. Parse the JSON on the client side, which is less data-intensive, and reduce the number of records requested at a time.
Either way, a slow internet connection is going to choke your app, so think twice before going server-side...
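If you do call the web service directly from the app, the request itself is small. Here is a minimal sketch (in JavaScript for brevity; the Swift version is analogous) of building the autocomplete request and post-processing the predictions before display. The keyword filter is just a placeholder for your own processing step:

```javascript
// Sketch: query the Places Autocomplete web service directly, then
// post-process the predictions before showing them to the user.

function buildAutocompleteUrl(input, apiKey) {
  const params = new URLSearchParams({ input: input, key: apiKey });
  return `https://maps.googleapis.com/maps/api/place/autocomplete/json?${params}`;
}

// Trim each prediction to what the UI needs, dropping any that fail
// your custom rules (here: a simple keyword filter as a placeholder).
function processPredictions(response, keyword) {
  return response.predictions
    .filter(p => p.description.toLowerCase().includes(keyword.toLowerCase()))
    .map(p => ({ id: p.place_id, label: p.description }));
}
```

One round trip from the phone, and the processing happens on-device, so the only network cost is the single Google request.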

Related

Intercept all REST API request made from local machine

I have a large Java application which connects to hundreds of cloud-based systems using their REST APIs and fetches data from those systems.
To connect to those different cloud systems we have different modules, and each one has a different approach to calling REST APIs: some modules use the Apache REST client, some use Google's REST client.
So there is no centralised place where the REST API is called.
I have to track the performance of the application, e.g. fetching account info from the test system takes 1 hour, and this process needs:
4 API calls to https://test/api/v2/accounts (this returns all account IDs)
8000 API calls to https://test/api/v2/accounts/{accountId} (this returns the details of each account)
I need to track the time taken by each API to respond, and based on that calculate the time taken by the application to process the data.
The important part here is a detailed API analysis, with graphical output if possible, e.g.
4 API calls to https://test/api/v2/accounts took 3 minutes
8000 API calls to https://test/api/v2/accounts/{accountId} took 48 minutes
I need any pointers on how I can achieve this, e.g. something that intercepts all REST calls made to https://test/api/v2.
As you've probably already discovered, without some extra tweaking, Wireshark just shows you the connections at the FQDN level: you can't see which individual endpoint is called (because TLS, by design, hides the content of the connection). You have a few options, though:
if you control the APIs that are being connected to, you can load the TLS keys into Wireshark, and it'll let you decrypt the TLS connection;
if you can force your app to use a proxy, you can use a Man-In-The-Middle (MITM) proxy (like Burp) to intercept the traffic; or
you can instrument your app to log destination and duration for all the API requests.

Keeping users in sync with each other in an social network app?

I am wondering what the best way is to keep users in sync with each other in a social network. The stack concerned is an iOS app with a NodeJS backend. Let me give you an example:
Say X and Y are friends on a social network. Y's posts appear in X's feed, and as such, Y is cached somewhere on X's phone. This morning, however, Y decided to change profile pictures. Everything is well, the new picture is uploaded to the server, but how do we go about letting X know about the change of profile picture?
My possible solution: Create a route /<UID>/updates that contains a stack of "cookies" which lets the user know what and who changed since the last time they made a GET request to the route.
This seems elegant enough, but what worries me is what happens on the client side (am I supposed to make a GET request every 2 minutes during my app's uptime?). Are there any other solutions?
One solution is indeed to poll the server, but that's not very elegant. A better way is to make use of websockets:
WebSockets is an advanced technology that makes it possible to open an interactive communication session between the user's browser and a server. With this API, you can send messages to a server and receive event-driven responses without having to poll the server for a reply.
They are a 2-way connection between client and server, allowing the server to notify the client of any changes. This is the underlying technology used in the Meteor framework for example.
Take a look at this blogpost for an example of how to use websockets between an iOS client and a NodeJS backend. They make use of the open source SocketRocket iOS library.
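A minimal sketch of the client-side half, assuming a hypothetical message shape `{ uid, field, value }` pushed over the socket: the server emits one small event when Y's picture changes, and X's app patches its local cache instead of re-polling an updates route.

```javascript
// Hypothetical update event pushed by the server over a websocket:
//   { uid: 'Y', field: 'avatarUrl', value: 'https://cdn/.../new.jpg' }
// The client keeps a cache of friends keyed by uid and patches it in place.

function applyUpdate(cache, update) {
  const user = cache[update.uid];
  if (user) {
    user[update.field] = update.value;
  }
  return cache;
}

// Wiring it up (SocketRocket on iOS, or any websocket client):
// socket.onmessage = msg => applyUpdate(friendCache, JSON.parse(msg.data));
```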

How to keep the application alive without a client

My goal is to send an email out every 5 min from my application even if there isn't a browser open to the application.
I'm using FluentScheduler to manage the tasks, which works up until the server decides to kill the application for inactivity.
My big constraints are:
I can't touch the server. It is how it is and I have to work around it.
I can't rely on a client refreshing a browser or anything else along the lines of using client side scripts.
I can't use any scheduler that uses a database.
What I have been focusing on is trying to create an artificial postback.
Note: The server is load balanced, so a solution could use that
Is there any way that I can keep my application from getting killed by the server?
You could use a monitoring service like https://www.pingdom.com/ to ping the server at regular intervals. Just make sure it hits an endpoint that invokes .NET code and not a static resource.
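If an external service is not an option, the same effect is a few lines run from any machine you control. A sketch (the URL is a placeholder, and the HTTP call is passed in so the snippet stays self-contained):

```javascript
// Ping an endpoint that executes .NET code so the worker process stays warm.
// `get` is whatever HTTP client you have (e.g. fetch); injecting it keeps
// this snippet testable and portable.
async function pingOnce(url, get) {
  const res = await get(url);
  if (res.status !== 200) {
    console.warn(`keep-alive ping to ${url} returned ${res.status}`);
  }
  return res.status;
}

// Run on the same cadence as the email job:
// setInterval(() => pingOnce('https://yourapp.example/keepalive.aspx', fetch)
//   .catch(() => {}), 5 * 60 * 1000);
```

Note that since the server is load balanced, each ping only warms whichever node it lands on, so you may need to target each node directly to keep all of them alive.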

How to dynamically and efficiently pull information from database (notifications) in Rails

I am working in a Rails application and below is the scenario requiring a solution.
I'm running some time-consuming processes in the background using Sidekiq and saving the related information in the database. When each of the processes completes, we would like to show a notification in a separate area saying that the process has finished.
So the notifications area really needs to pull things from the back-end (this notification area will be available on every page) and show them dynamically. I thought Ajax must be an option, but I don't know how to trigger it for a particular area only. Or is there any other option by which the client can fetch dynamic content from the server efficiently without creating much traffic?
I know this is a broad topic, but any relevant info would be greatly appreciated. Thanks :)
You're looking at a perpetual connection (using either SSEs or websockets), something Rails has started to address with ActionController::Live.
Live
You're looking for "live" connectivity:
"Live" functionality works by keeping a connection open between your app and the server. Rails is an HTTP request-based framework, meaning it only sends responses to requests. The way to send live data is to keep the response open (using a perpetual connection), which allows you to send updated data to your page on its own timescale.
The way to do this is to use a front-end method to keep the connection "live", and a back-end stack to serve the updates. The front-end will need either SSEs or a websocket, which you'll connect with JS.
SSEs and websockets basically give you access to the server outside the scope of "normal" requests (SSEs, for example, use the text/event-stream MIME type).
Recommendation
We use a service called Pusher.
This basically gives you a third-party websocket service, to which you can push updates. Once the service receives an update, it will send it to any channels which are subscribed to it. You can split the channels it broadcasts to using the pub/sub pattern.
I'd recommend using this service directly (they have a Rails gem, and I'm not affiliated with them); it also provides a super simple API.
Other than that, you should look at the ActionController::Live functionality of Rails
The answer suggested in the comment by #h0lyalg0rithm is one option to go with.
However, the primitive options are:
Use setInterval in JavaScript to perform a task every x seconds, i.e. polling.
Use jQuery or native Ajax to poll a controller/action via a route, and have the controller return the data as JSON.
Use document.getElementById or jQuery to update the data on the page.
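The three primitive steps above can be sketched together; `/notifications.json` is a hypothetical route whose controller renders the completed-job records as JSON:

```javascript
// Build a polling "tick": fetch anything newer than the last id we saw
// and hand each new notification to a render callback.
function makePoller(fetchJson, render) {
  let lastId = 0;
  return async function tick() {
    const items = await fetchJson(`/notifications.json?since=${lastId}`);
    for (const n of items) {
      lastId = Math.max(lastId, n.id);
      render(n); // e.g. append to the notifications area in the DOM
    }
    return lastId;
  };
}

// In the page, poll every 5 seconds:
// setInterval(makePoller(url => fetch(url).then(r => r.json()), showNote), 5000);
```

Sending a `since` parameter keeps each poll cheap: the controller only returns records newer than the last one the client rendered, which addresses the traffic concern in the question.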

getting random 404 errors using Valence

When I make API calls to the server, I'm getting 404 errors for various data -- grades, role IDs, terms -- that I won't get on the next time I call it. The data's there on the server, viewable by the same user, and is often returned successfully, but not every time. The same user context will return data successfully for other calls.
Any ideas what could be causing this?
I'm using the Valence API with the Python client library and our 9.4.1 SP18 instance of Desire2Learn in a non-interactive script.
More detail: the text returned on the bad 404s is " ErrorThe system cannot find the path specified."
It would help enormously to gather data about your case: packet traces showing successful calls from your client alongside unsuccessful ones would be particularly useful to see. If you are quite certain (and from your description I see no reason you shouldn't be) that you're forming the calls in the same, correct way each time you make them, then the behaviour you're noticing points to some wider network or configuration issue: sometimes your calls are properly getting through the web service layer, and sometimes they are not. That would mean the problem is not in the way you're using the API, but in the way the service is able to receive the request.
I would encourage you, especially if you can gather data to provide showing this behaviour, to open a support incident with Desire2Learn's help desk in conjunction with your Approved Support Contact, or your Partner Manager (depending on whether you're a D2L client or a D2L partner).
