How to make external API calls more efficient - system-design

I'm building a shipment tracking system. I have an external service that I can call via an API to get a shipment's status. Whenever a shipment's status changes, I want to be notified.
What would be the most efficient way to call the external API to check the status of the shipments? Imagine I have 1 million shipments to track.

You should try to set up a webhook from the external service to your server: when a shipment changes status, the external API calls an endpoint on your server. This is very common practice for use cases like yours, where statuses change and you need to be notified.
Otherwise, you'll just have to poll the API periodically. I'd suggest fitting your calls to the expected pattern of status updates (e.g. if a shipment typically takes a minimum of 2 days to update, wait 2 days before checking for a status update).
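If you do end up polling, a priority queue keyed by each shipment's next-check time keeps you from scanning all 1 million shipments on every tick. Here is a minimal sketch of that scheduling idea, assuming simple numeric timestamps (e.g. hours); the function and parameter names are mine, not from any particular library:

```python
import heapq

def make_schedule(shipment_ids, now, interval):
    """Build a min-heap of (next_check_time, shipment_id) pairs."""
    heap = [(now + interval, sid) for sid in shipment_ids]
    heapq.heapify(heap)
    return heap

def due_shipments(heap, now, interval):
    """Pop every shipment whose check time has arrived, reschedule
    it, and return the batch of ids to poll right now."""
    due = []
    while heap and heap[0][0] <= now:
        _, sid = heapq.heappop(heap)
        due.append(sid)
        heapq.heappush(heap, (now + interval, sid))
    return due
```

In a real system the `interval` could be per-shipment, derived from the carrier's typical update cadence, so slow-moving shipments are checked far less often than ones expected to update soon.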

Related

Twilio Studio Flow - Adding a Retry/Delay

I'm trying to figure out how I can implement a retry policy in a Twilio Studio Flow. I see that they have an example, but it only has a delay of no more than 10 seconds.
I want something I can use to retry when my webhook service is down. I did set up the sample from the Twilio docs, but it only seems to work for delays of no more than 10 seconds, and I need it to pause for an hour or two. Say the HTTP Post step fails because the webhook service is offline: I want it to pause for an hour and try again, then pause for 2 hours, then 3, then 4, and so on. The point is that I don't want to lose the user's response.
What I am trying to do is avoid losing any of the user responses from a survey if my webhook application goes down. We saw this happen in production for a couple of hours and we lost survey responses from 200 users.
If this is not possible, is there a way I can reach back into the Twilio logs and get access to the responses that failed while the webhook service was down? I recall running into something where you can pull back the logs, which could then be used to identify the ones that failed.
This kind of logic isn't really built into Studio. Ten-second waits are typically the most you will see, because both Twilio Functions and the HTTP request widget time out at that point.
If you wish to include this kind of wait, you will need some sort of workaround: go into a Send & Wait For Reply widget (with some additional logic so it ignores responses from your customers) and set its timeout to the amount of time you want to wait. You can then transition to the webhook request again and re-attempt.
Alternatively, you can create a utility which uses the Execution resource to find all the failed flows for a given time period so you can choose how best to move forward.
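The linearly growing pauses the question describes (1 hour, then 2, then 3, then 4) are easy to precompute as absolute re-attempt times, whether the retries are driven by Studio transitions or by an external utility. A small sketch, with names and defaults of my own choosing:

```python
from datetime import datetime, timedelta

def retry_times(first_failure, max_attempts=4, step_hours=1):
    """Linearly growing waits after each failure: 1h, then 2h,
    then 3h, and so on. Returns the absolute datetimes at which
    to re-attempt the failed webhook request."""
    times = []
    t = first_failure
    for attempt in range(1, max_attempts + 1):
        t += timedelta(hours=step_hours * attempt)
        times.append(t)
    return times
```

A failure at midnight would then be retried at 01:00, 03:00, 06:00, and 10:00, with each gap one hour longer than the last.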

Webhooks vs. polling for all users' drives within an org (multi-tenant) - MS Graph

We are going to have a multi-tenant application that scans files for each user per organization, across multiple orgs. We would like to be notified whenever any user uploads or changes a file in their drive. There are at least two options to retrieve those changes: either store a delta link for each user and poll it periodically, or subscribe to change notifications using webhooks. If we have 10k+ users, I am not sure the first option is feasible. For the latter, my only concern with webhooks is: do I have to register for each user separately? I.e., does the resource need to be /users//drive/root, or should it just be /drive/root? Since there is a limit on the number of webhooks per app/tenant, I am not sure creating a webhook for each user is the right approach.
Please advise.
The limitation applies to the users/groups objects (i.e. if you wanted to subscribe to users/groups being updated), not to the drives.
Yes, you need to subscribe to each drive individually, and to renew the subscriptions individually as well. If you want to reduce the number of round trips, you can group those operations in a batch.
You can also combine delta + webhooks: do the initial sync with delta and store the token, then register your webhook, and trigger a query of the delta link upon receiving a change notification. This way you get the best of both features and avoid regularly polling delta when there might not be any changes.
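For the batching part, Graph's JSON `$batch` endpoint accepts up to 20 requests per call at the time of writing, so subscription renewals can be chunked accordingly. A sketch of building those payloads (the helper name and expiry value are mine; the request shape follows the documented `$batch` format):

```python
def batch_renewals(subscription_ids, new_expiry, batch_size=20):
    """Chunk PATCH /subscriptions/{id} renewals into $batch payloads,
    each holding at most batch_size sub-requests."""
    batches = []
    for start in range(0, len(subscription_ids), batch_size):
        chunk = subscription_ids[start:start + batch_size]
        batches.append({
            "requests": [
                {
                    "id": str(i),
                    "method": "PATCH",
                    "url": f"/subscriptions/{sid}",
                    "headers": {"Content-Type": "application/json"},
                    "body": {"expirationDateTime": new_expiry},
                }
                for i, sid in enumerate(chunk, 1)
            ]
        })
    return batches
```

Each payload would then be POSTed to `https://graph.microsoft.com/v1.0/$batch`; renewing 10k subscriptions becomes ~500 round trips instead of 10k.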

Reliably implement presence status with ActionCable

I have implemented a chat feature using ActionCable. I am now trying to implement a presence status based on the implementation of user appearances in the README.
This documentation mentions the following statement:
The #subscribed callback is invoked when, as we'll show below, a client-side subscription is initiated. In this case, we take that opportunity to say "the current user has indeed appeared". That appear/disappear API could be backed by Redis or a database or whatever else.
I can implement an online attribute in my database and update it when the application receives appear/disappear notifications. But I have no guarantee about the reliability of this attribute. It could become out of sync in case of a server failure for example.
How could I implement this in a reliable way?
Place it in a Redis structure that expires after a certain amount of time (use a TTL). If you store it somewhere for an infinite amount of time (like the DB), it can go out of sync. You might argue that you could set all user presence to false on application startup, but that only works until you run multiple servers or workers. While a user is connected, insert a presence value for that user into Redis every few minutes. Also handle the connection-close event to delete the user's presence from Redis, for better accuracy than a few minutes.
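The pattern above boils down to three operations: refresh a key with a TTL on each heartbeat, delete it on a clean disconnect, and let expiry clean up after crashes. Here is a minimal in-memory stand-in (class and method names are mine) that mirrors the Redis semantics so the logic can be seen without a Redis server:

```python
import time

class PresenceStore:
    """In-memory stand-in for the Redis pattern: a heartbeat does the
    equivalent of SETEX presence:<id> <ttl> 1, a clean disconnect does
    DEL, and key expiry self-heals after a server crash."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._expires_at = {}

    def heartbeat(self, user_id, ttl=180):
        # Redis equivalent: SETEX presence:<user_id> <ttl> 1
        self._expires_at[user_id] = self._clock() + ttl

    def disconnect(self, user_id):
        # Redis equivalent: DEL presence:<user_id>
        self._expires_at.pop(user_id, None)

    def is_online(self, user_id):
        # Redis equivalent: EXISTS presence:<user_id>
        return self._expires_at.get(user_id, float("-inf")) > self._clock()
```

With real Redis, `heartbeat` becomes a `SETEX` issued from a periodic job while the ActionCable connection is alive; if the server dies without calling `disconnect`, the key simply expires and the user correctly shows as offline.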

Shopify not calling on my fulfillment /fetch_tracking_numbers

I am developing a new warehouse integration for the company I work for, as there was no existing solution.
I have gotten almost every feature to work, including fulfillment requests and stock requests, and I have even registered a carrier service for real-time shipping rates. However, for some reason I cannot get the /fetch_tracking_numbers call to fire from Shopify. According to the documentation:
"Once per hour Shopify will make a request to this endpoint if there are any completed fulfillments awaiting tracking numbers from the remote fulfillment service."
I have added logs to the call so I can troubleshoot it, but it seems that Shopify never makes this call to the server. If I visit the URL myself I can fire the code (logs and all), but Shopify doesn't seem to be doing so.
In the install I made sure to provide a valid callback URL (that's why fetch stock works fine) and set the tracking support field to true, but still nothing.
One way to be sure would be to ensure that a product's variant is set to your custom fulfillment company. Then complete a bogus order for the product and fulfill it. Once you have fulfilled it, Shopify will poll your endpoint for orders and their tracking numbers. It works fine for me... but I am thinking maybe you're waiting on Shopify and nothing is actually fulfilled.
I am probably too late to answer this, but Shopify will make calls to this endpoint only if:
1) you have "completed" fulfillments, and
2) tracking numbers for those completed fulfillments are pending.
You need to mark a fulfillment "complete" after shipping all of the order's line items.

How to get the location field from the LinkedIn API while fetching network updates

I want to fetch the location of a person and their connections. How should I specify the fields for this purpose?
http://api.linkedin.com/v1/people/~/network/updates:(update-content:(person:(id,headline,location)))?type=CONN
If I make separate calls just to get the location, it will be very costly, since it requires an extra call for each new connection and greatly increases the number of calls. So I want a solution that lets me get the location in the network updates API call itself.
EDIT: Another thing I need is to check the privacy settings of connections. As far as I know, LinkedIn doesn't provide an API that returns which connections allow their updates to be seen and which do not. So when I try to get network updates for a particular connection, it returns an error saying that this user doesn't allow the public to see their updates. If I want to check this before calling the network updates API, how can I do it in Ruby?
OR
Let me know some way to pass multiple dynamic IDs while calling linkedin API.
When retrieving person data associated with a network update, it appears that only the basic fields are available. The solution would be to get the person's id and make a second call to the Profile API:
http://api.linkedin.com/v1/people/id=12345:(first-name,last-name,connections,location)
Currently, LinkedIn doesn't provide an API for this purpose. You have to make multiple calls, but you should make those calls in chunks to avoid timeout issues.
Reference
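The chunking itself is straightforward: split the connection ids into fixed-size groups and issue one request per group instead of one per id. A sketch of the grouping logic; the batch-selector URL form shown is a hypothetical modeled on the old v1 field-selector syntax, so verify it against the LinkedIn docs before relying on it, and note the v1 API has since been retired:

```python
def chunked_profile_urls(member_ids, fields, chunk_size=10):
    """Split member ids into chunks and build one request URL per
    chunk, so N profiles cost ceil(N / chunk_size) calls."""
    base = "https://api.linkedin.com/v1/people::({ids}):({fields})"
    field_sel = ",".join(fields)
    urls = []
    for start in range(0, len(member_ids), chunk_size):
        ids = ",".join(f"id={m}" for m in member_ids[start:start + chunk_size])
        urls.append(base.format(ids=ids, fields=field_sel))
    return urls
```

The grouping pattern transfers to any API that accepts multiple ids per request, whatever the exact URL syntax turns out to be.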
Try this API endpoint:
`https://api.linkedin.com/v1/people/~/connections:(id,first-name,last-name,location,picture-url,positions:(title,company:(name)))`
