I have a large Java application which connects to hundreds of cloud-based systems using their REST APIs and fetches data from those systems.
To connect to those different cloud systems we have different modules, and each one has a different approach to calling REST APIs: some modules use the Apache REST client, some use Google's REST client.
So there is no centralised place where the REST APIs are called.
I have to track the performance of the application, e.g. fetching account info from the test system takes 1 hour, and this process needs:
4 API calls to https://test/api/v2/accounts -- (this will return all account IDs)
8000 API calls to https://test/api/v2/accounts/{accountId} -- (this will return the details of each account)
I need to track the time taken by each API call to respond and, based on that, calculate the time taken by the application to process that data.
The important part here is a detailed API analysis, with graphical output if possible, e.g.
4 API calls to https://test/api/v2/accounts -- took 3 minutes
8000 API calls to https://test/api/v2/accounts/{accountId} -- took 48 minutes
I need any pointers on how I can achieve this, something like intercepting all REST API calls made to https://test/api/v2.
As you've probably already discovered, without some extra tweaking, Wireshark just shows you the connections at the FQDN level: you can't see which individual endpoint is called (because TLS, by design, hides the content of the connection). You have a few options though:
if you control the APIs that are being connected to, you can load the TLS keys into Wireshark, and it'll let you decrypt the TLS connection;
if you can force your app to use a proxy, you can use a Man-In-The-Middle (MITM) proxy (like Burp) to intercept the traffic; or
you can instrument your app to log destination and duration for all the API requests.
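For the third option, here is a minimal sketch of what that instrumentation could look like with Apache HttpClient 4.x interceptors. The attribute keys, the stats maps, and the normalize() rule that collapses /accounts/12345 into a single /accounts/{id} bucket are all illustrative assumptions, not a prescribed design:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

import org.apache.http.HttpRequestInterceptor;
import org.apache.http.HttpResponseInterceptor;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class TimedHttpClient {

    private static final String START = "timing.startNanos";
    private static final String URI = "timing.uri";

    // accumulated per URL template, e.g. /api/v2/accounts/{id}
    static final Map<String, LongAdder> totalNanos = new ConcurrentHashMap<>();
    static final Map<String, LongAdder> callCounts = new ConcurrentHashMap<>();

    public static CloseableHttpClient create() {
        HttpRequestInterceptor started = (request, context) -> {
            context.setAttribute(START, System.nanoTime());
            context.setAttribute(URI, normalize(request.getRequestLine().getUri()));
        };
        HttpResponseInterceptor finished = (response, context) -> {
            long elapsed = System.nanoTime() - (Long) context.getAttribute(START);
            String uri = (String) context.getAttribute(URI);
            totalNanos.computeIfAbsent(uri, k -> new LongAdder()).add(elapsed);
            callCounts.computeIfAbsent(uri, k -> new LongAdder()).increment();
        };
        return HttpClients.custom()
                .addInterceptorFirst(started)
                .addInterceptorLast(finished)
                .build();
    }

    // assumption: numeric path segments are IDs, so all 8000
    // /accounts/<n> calls aggregate under one key
    private static String normalize(String uri) {
        return uri.replaceAll("/\\d+", "/{id}");
    }
}

Dumping the two maps periodically gives you exactly the per-endpoint call counts and total durations from your example, and you can feed them into something like Micrometer or a CSV for graphing. Modules that use Google's REST client would need the equivalent hook through its own interceptor mechanism, since there is no centralised call site to wrap.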
Using the Smooch API, I am trying to obtain all of the messages sent to my Facebook appid in the past few minutes or hours.
The Get Messages REST method does exactly what I need, except that it only returns messages from a particular appUserId. This isn't useful unless you already know which users have sent you messages. I cannot use a webhook as the application resides behind a corporate firewall. Opening the firewall to connections that originate from the outside is not an option (even with white-listing).
Is there a way to invoke the Get Messages REST method such that it will ignore the appUserId filter? Perhaps some sort of wildcard character?
GET {{url}}/{{apiVersion}}/apps/{{appId}}/appusers/{{appUserId}}/messages
Unfortunately you do need to have the appUserId (or userId) on hand in order to query user messages.
Webhooks are a pretty essential part of building a Smooch integration. If you can't receive them through your firewall, then you might consider building an intermediary service outside of your corporate network for receiving Smooch webhooks (see the sketch below). For each webhook event it receives, it would either:
Forward it through a secure tunnel into your corporate network, or
Store the appUserId (or the whole event) in its own database, and provide a secure endpoint that allows your corporate network service to query that data.
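A minimal sketch of the second variant, using only the JDK's built-in HTTP server. The paths, the port, and the assumption that the webhook JSON carries an appUser._id field are illustrative; check the Smooch webhook docs for the real payload shape, and use a proper JSON parser rather than this stand-in regex:

import java.io.InputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import com.sun.net.httpserver.HttpServer;

public class WebhookRelay {

    // stand-in for "its own database"
    static final Queue<String> appUserIds = new ConcurrentLinkedQueue<>();

    // assumption: the event JSON contains "appUser":{"_id":"..."}
    static final Pattern USER_ID =
        Pattern.compile("\"appUser\"\\s*:\\s*\\{\\s*\"_id\"\\s*:\\s*\"([^\"]+)\"");

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        // Smooch posts webhook events here
        server.createContext("/smooch-webhook", exchange -> {
            try (InputStream in = exchange.getRequestBody()) {
                String body = new String(in.readAllBytes(), StandardCharsets.UTF_8);
                Matcher m = USER_ID.matcher(body);
                while (m.find()) {
                    appUserIds.add(m.group(1));
                }
            }
            exchange.sendResponseHeaders(200, -1); // empty 200 is enough
            exchange.close();
        });

        // your service behind the firewall polls this over an outbound
        // connection; put TLS and auth in front of it in practice
        server.createContext("/pending-user-ids", exchange -> {
            StringBuilder sb = new StringBuilder();
            for (String id; (id = appUserIds.poll()) != null; ) {
                sb.append(id).append('\n');
            }
            byte[] out = sb.toString().getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, out.length == 0 ? -1 : out.length);
            if (out.length > 0) {
                exchange.getResponseBody().write(out);
            }
            exchange.close();
        });

        server.start();
    }
}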
I'm curious to know more about your use case, e.g. which Smooch channels are you integrating? With more details I might be able to improve this answer.
@alavers We would like to leverage nearly every messaging integration you offer.
@alavers You may want to consider providing a Get Messages variant that is better suited for use within a corporate firewall environment. An excellent example is the HTTP long-poll implementation provided by APIs such as Amazon's SQS API (see the sketch below). Their receiveMessage method waits for up to the specified time period but returns as soon as a message is received. This provides nearly the same performance as a webhook but eliminates the need for a customer to open their corporate firewall to connections that originate from outside the corporation. Most IT departments will approve connections that originate from within the corporation, but permitting connections that originate from the outside becomes a very difficult sell.
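For reference, this is roughly what that long-poll pattern looks like from the consumer side, using the AWS SDK for Java v1 (the queue URL is a placeholder):

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

public class LongPollConsumer {
    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        ReceiveMessageRequest request =
            new ReceiveMessageRequest("https://sqs.us-east-1.amazonaws.com/123456789012/my-queue")
                .withWaitTimeSeconds(20)      // server holds the call open up to 20s
                .withMaxNumberOfMessages(10);
        while (true) {
            // the connection originates inside the firewall, so no inbound
            // rule is needed; the call returns early once a message arrives
            for (Message m : sqs.receiveMessage(request).getMessages()) {
                System.out.println(m.getBody());
            }
        }
    }
}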
I want to understand how I can integrate VoIP into my web app, which in my case is a Rails app.
What I want to achieve is sending socket events to the front-end for each call state:
call ringing
call started
call ended
The implementation is already done, but I'm not convinced it is the right architecture, and the information I have found so far online is sparse.
I don't think it makes sense to explain how it is currently done (but if needed I can provide that). Starting from the ruby-asterisk gem, which can be used to retrieve data about an extension number, what would be the correct architecture to continuously retrieve call-state events and send them as socket events to the web?
How can you determine that the call has ended?
On the overall implementation, do you see any use for Redis, for saving the previous states of a call and then determining the new states?
The main issue is: Asterisk is a PBX.
Again: it is a small-office PBX, not an all-in-one platform with an API.
So the correct architecture for high load is a centralized, high-performance socket server which supports auth, responds to your API calls (if any), handles event notification, etc. After that, you use AMI + dialplan to notify your server about actions on the PBX.
Your web app should connect to that server, not directly to Asterisk. Only ONE connection to Asterisk is recommended, for performance reasons.
If you have low load, it doesn't matter what you do; it will likely work OK.
Asterisk does not support Redis, so using it is unlikely to help. Use CDRs for the end-of-call event.
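A minimal sketch of the AMI side of that architecture, using the asterisk-java library. The host, credentials, and the broadcast() stub are placeholders, and the exact events worth handling vary by Asterisk version:

import org.asteriskjava.manager.ManagerConnection;
import org.asteriskjava.manager.ManagerConnectionFactory;
import org.asteriskjava.manager.ManagerEventListener;
import org.asteriskjava.manager.event.HangupEvent;
import org.asteriskjava.manager.event.ManagerEvent;
import org.asteriskjava.manager.event.NewStateEvent;

public class CallEventBridge implements ManagerEventListener {

    public static void main(String[] args) throws Exception {
        ManagerConnectionFactory factory =
            new ManagerConnectionFactory("pbx.example.com", "amiuser", "amisecret");
        ManagerConnection connection = factory.createManagerConnection();
        connection.addEventListener(new CallEventBridge());
        connection.login(); // the single AMI connection recommended above
        Thread.currentThread().join(); // keep the bridge alive
    }

    @Override
    public void onManagerEvent(ManagerEvent event) {
        if (event instanceof NewStateEvent) {
            // channel state changes cover "ringing" and "answered"
            NewStateEvent e = (NewStateEvent) event;
            broadcast(e.getChannel(), e.getChannelStateDesc());
        } else if (event instanceof HangupEvent) {
            // hangup is the live "ended" signal; CDRs stay authoritative
            broadcast(((HangupEvent) event).getChannel(), "ended");
        }
    }

    private void broadcast(String channel, String state) {
        // placeholder: push to the browsers through your own socket server
        System.out.printf("%s -> %s%n", channel, state);
    }
}

Your Rails app (or any web client) then subscribes to that socket server for the ringing/started/ended events instead of talking to Asterisk itself.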
I'm using Electron, which is based on Chromium, to create an offline desktop application.
The application uses a remote site, and we are using a service worker to make parts of the site available offline. Everything is working great, except for a certain situation that I call the "airplane wifi situation".
Using Charles, I have restricted the download bandwidth to 100 bytes/s. The connection is sent through webview.loadURL, which eventually calls LoadURLWithParams in Chromium. The problem is that it does not fail and then activate the service worker, as a total lack of connectivity would. Once the request is sent, it waits forever for the response.
My question is, how do I timeout the request after a certain amount of time and load everything from the service worker as if the user was truly offline?
An alternative to writing this yourself is to use the sw-toolbox library, which provides routing and runtime caching strategies for service workers, along with some built in options for helping with these sorts of advanced use cases. In particular, you'd want to use the networkTimeoutSeconds parameter to configure the amount of time to wait for a response from the network before you fall back to a previously cached response.
You can use it like the following:
toolbox.router.get(
  new RegExp('my-api\\.com'),
  toolbox.networkFirst,
  { networkTimeoutSeconds: 10 }
);
That would configure a route that matched GET requests with URLs containing my-api.com, and applied a network-first strategy that will automatically fall back to the previously cached response after 10 seconds.
Here is my need:
I have to display some information on a web page.
The web browser is actually on the same machine (localhost).
I want the data to be updated dynamically by the server initiative.
Since HTTP is a request/response protocol, I know that to get this functionality, the connection between the server and the client (which is local here) should be kept open in some way (WebSockets, Server-Sent Events, etc.).
Yes, "realtime" is really a fashion trend nowadays and there are many frameworks out there to do this (Meteor, etc.).
And indeed, it seems that Rails supports this functionality too (Server-Sent Events in Rails 4 and ActionCable, which uses WebSockets, in Rails 5).
So achieving this functionality would not be a big deal, I guess...
Nevertheless, what I really want is to trigger an update of the webpage (displayed here locally) from a request made by another client.
This picture will explain it better:
At the beginning, the browser connects to the (local) server (green arrows).
I guess that a thread Y is executed, where all the session data (instance variables) are stored.
In order to use some "realtime" mechanism, the connection remains open and therefore thread Y is not terminated (I guess this is how it works).
A second user connects (blue arrows) to the server (it could be the same web page or not) and performs some action (e.g. posting a form).
Here the response to that external client does not matter; a plain HTTP OK response is fine, but a confirmation web page could also be returned.
In any case, thread X (and/or its connection) has no particular reason to be kept alive.
OK, here is my question (BTW, thank you for reading this far).
How can I push this new data to the local web browser?
I see two different ways to do this:
Path A: before terminating, thread X passes the data (its instance variables) to thread Y, which still has its connection open. The server is thus able to update the web browser.
Path B: before terminating, thread X sends a request (I mean a response, since it is the server) directly to the web browser using a particular socket.
Which mechanisms should I use in either path to achieve this functionality?
For path A, how can I exchange data between threads?
For path B, how can I use an already opened socket?
And which of these two paths (or another one) is actually the best way to do this?
Again, thank you for reading this far, and sorry for my bad English.
I hope I've been clear enough to explain my need.
You are overthinking this. There is no need to think of such low-level mechanisms as threads and sockets. Most (all?) pub-sub live-update tools (ActionCable, faye, etc.) operate in terms of "channels" and "events".
So, your flow will look like this:
Client A (web browser) makes a request to your server and subscribes to events from channel "client-a-events" (or something).
Client B (the other browser) makes a request to your server with instructions to post an event to channel "client-a-events".
Pub-sub library does its magic.
Client A gets an update and updates the UI accordingly.
Check out this intro guide: Action Cable Overview.
I am working on an iPhone application and need to implement the Google Places auto-suggest functionality. However, I cannot use the textbox control provided by Google, as I need to do some processing on the data before displaying the list to the user. Auto-suggest is time-critical functionality, so I need to know whether I should call the Google API from my server and have my application call my server (since the user's connection might be slow), or whether there is a good reason to still call the Google API from the phone app itself.
Thanks
The advantage of calling the API client-side is that the processing and bandwidth will be shared among the client devices, which saves you from high server-side costs after deployment.
If client-side response time is the motive, I would again suggest client-side calling instead of server-side calling, because there is only one request instead of two. Try to parse the JSON data on the client side; it is less data-intensive. Also reduce the number of records requested at a time.
In any case, a slow internet connection is going to choke your app, so think twice before going server-side...