I've been looking at the APIs for UPS, FedEx, USPS, etc., and I can't seem to find any method of pulling all tracking info for a given user. I only see methods that pull info via a tracking number. Has anyone been able to find a way to get at this data? It seems silly to me that these huge carriers wouldn't supply this info in an easy way.
I'm trying to accomplish this in Rails.
We were able to integrate with UPS Quantum View and even with FedEx Insight. These services will give you a list of all inbound and outbound shipments that are billed to your UPS/FedEx account. You can get info on every piece of each shipment: tracking numbers, weight, shipper and recipient info (Company name, city, state, country).
To pull information from UPS Quantum View using their API you will need to obtain a so-called Access Key, and you'll need to create Subscriptions: one for inbound shipments, one for outbound ones. This can be done on ups.com if you already have a UPS shipping account. You don't have to wait; the key is provided instantly. We have a video of how to get the key and set up the subscriptions. It's on easytag.net in the Help section. The video title is Obtaining a UPS Access Key.
When creating API requests to UPS, you'll need to include a key and a subscription name.
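For illustration, here is a minimal Ruby sketch of what a Quantum View events request might look like. The endpoint URL, the XML element names, and the subscription name "OutboundShipments" are assumptions based on UPS's XML API, so check them against the current documentation and your own account:

```ruby
require "net/http"
require "uri"

# The access key identifies your API credentials; the subscription name
# selects which Quantum View subscription's events you want.
access_xml = <<~XML
  <?xml version="1.0"?>
  <AccessRequest>
    <AccessLicenseNumber>YOUR_ACCESS_KEY</AccessLicenseNumber>
    <UserId>YOUR_UPS_USER</UserId>
    <Password>YOUR_UPS_PASSWORD</Password>
  </AccessRequest>
XML

request_xml = <<~XML
  <?xml version="1.0"?>
  <QuantumViewRequest>
    <Request>
      <RequestAction>QVEvents</RequestAction>
    </Request>
    <SubscriptionRequest>
      <Name>OutboundShipments</Name>
    </SubscriptionRequest>
  </QuantumViewRequest>
XML

# UPS's XML API expects both documents concatenated in one POST body.
uri = URI("https://onlinetools.ups.com/ups.app/xml/QVEvents")
response = Net::HTTP.post(uri, access_xml + request_xml,
                          "Content-Type" => "application/xml")
puts response.body # XML listing shipment events for the subscription
```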
UPS has a Quantum View API. Quantum View is their service that allows tracking, etc., by account, not just by individual tracking number. I assume that will get you what you need. I don't have an account, so I can't see their detailed API documentation; I'm just guessing.
According to this page,
you can look up Spaces created by accounts followed by a specific user. Given a single user ID, a dedicated endpoint will traverse the user's followings and return live or upcoming Spaces created by any of these users.
I would like to check periodically for Spaces scheduled by any of the ~10k accounts I follow, but I can't find the "dedicated endpoint."
Unfortunately this is an error in the Twitter developer documentation - this functionality does not exist in the API. You'd need to build this via a much more manual process of calling the lookup endpoint by creator ID - the endpoint supports up to 100 user IDs at a time, so it will take a number of calls to cover your ~10k followed accounts (a rough batching sketch is below).
(side note, I've asked for the documentation to be updated to correct this confusing statement)
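A sketch of that manual batching in Ruby, calling the spaces lookup-by-creator-IDs endpoint in chunks of 100. The bearer token handling and the way you collect your followed IDs are placeholders you'd fill in yourself:

```ruby
require "net/http"
require "json"

BEARER_TOKEN = ENV.fetch("TWITTER_BEARER_TOKEN")

# Look up live/upcoming Spaces for up to 100 creator IDs per call.
def spaces_for(creator_ids)
  uri = URI("https://api.twitter.com/2/spaces/by/creator_ids")
  uri.query = URI.encode_www_form(user_ids: creator_ids.join(","))
  req = Net::HTTP::Get.new(uri)
  req["Authorization"] = "Bearer #{BEARER_TOKEN}"
  res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
  JSON.parse(res.body)["data"] || []
end

following_ids = [] # fill with the ~10k IDs, e.g. by paging GET /2/users/:id/following
live_or_upcoming = following_ids.each_slice(100).flat_map { |batch| spaces_for(batch) }
```

Note that you'd also need to page through the following list to collect the IDs in the first place, and mind the rate limits on both endpoints when polling periodically.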
In a ride-booking app, communication between the driver and the user is required.
Here is the case: if user A contacts the driver via the website or app, the call or SMS can be handled via Twilio, since we don't want to expose their contact numbers to each other.
But if three users A, B, and C contact the driver, and the driver has no app installed but wants to call back and reply by SMS, how can the driver reach the right user based only on caller ID?
There could be a large number of users, and we can't buy a separate Twilio number for each user.
Please advise a solution.
How many users are likely to need to contact each individual driver at any one time? Not many, I would think.
Buy 10 Twilio numbers, assign them incrementally as users call/SMS their driver and save the assignment for user/driver numbers in your database.
If the driver calls/SMS a number in response query the database and route the call/SMS to the user it was assigned to when they called the driver.
Recycle the 1st assignment once the 11th user calls/SMSes the driver; rinse and repeat.
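A rough Rails sketch of that assignment and recycling logic. The ProxyAssignment model and its columns are hypothetical names for your own table, not a library API:

```ruby
# Assumes a ProxyAssignment model with driver_number, user_number and
# proxy_number string columns.
POOL = %w[+15550000001 +15550000002 +15550000003].freeze # ...your 10 Twilio numbers

def assign_proxy_number(driver_number, user_number)
  existing = ProxyAssignment.find_by(driver_number: driver_number,
                                     user_number: user_number)
  return existing.proxy_number if existing

  used  = ProxyAssignment.where(driver_number: driver_number).order(:updated_at)
  proxy = (POOL - used.pluck(:proxy_number)).first

  if proxy.nil?
    oldest = used.first          # pool exhausted for this driver:
    proxy  = oldest.proxy_number # recycle the oldest assignment
    oldest.destroy
  end

  ProxyAssignment.create!(driver_number: driver_number,
                          user_number: user_number,
                          proxy_number: proxy)
  proxy
end
```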
Twilio developer evangelist here.
In order to maintain anonymous communications in this way you need as many numbers as the maximum number of relationships one person in your system has. The best explanation of this is in this article on masked text messaging with Twilio (though it applies to calls too).
Your comment on miknik's answer suggests you want to keep these relationships alive forever. This is not the way most services build out this feature. They normally give the relationship a limited lifetime; Uber, for example, will recycle the phone number a number of minutes after a ride ends.
If you are looking for an easier way to manage this kind of number pooling and masking, check out Twilio Proxy, it handles a lot of the logic for you. It is still in developer preview right now, but you can apply for early access.
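To make the masking concrete, here is a minimal sketch of an inbound-SMS webhook in Rails using the twilio-ruby TwiML helpers. The SmsController and the assignment lookup mirror the hypothetical ProxyAssignment table sketched above; they are illustrations, not Twilio's API:

```ruby
require "twilio-ruby"

class SmsController < ApplicationController
  skip_before_action :verify_authenticity_token

  # Set this action as the SMS webhook on each masked Twilio number.
  def incoming
    assignment = ProxyAssignment.find_by(proxy_number: params["To"],
                                         driver_number: params["From"])
    twiml = Twilio::TwiML::MessagingResponse.new do |r|
      if assignment
        # Relay the driver's reply to the user, still from the masked number.
        r.message(body: params["Body"], to: assignment.user_number)
      else
        r.message(body: "Sorry, we couldn't route your message.")
      end
    end
    render xml: twiml.to_s
  end
end
```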
So I got a task to prepare a simple analysis of how useful the Slack API methods (https://api.slack.com/methods) are from a sociometric point of view.
Yesterday I didn't even know that such a thing as sociometry existed, and I still don't know how to evaluate an API using its methodology. Has anyone here ever gotten a similar task, or have any idea how to approach such an analysis? What literature would be useful? I don't mean for this analysis to be particularly long, but as of now I don't even know where to start.
Frankly, I am not an expert on sociometry, but here is how I would approach it:
I would assume the goal is to create a sociogram depicting the relationships between all users on a Slack team using the API methods. So the question is how useful the API methods are for achieving that goal.
Slack does not have a "friends list", like Facebook, so you have to come up with your own approach on how to identify relationships on Slack. Slack is a messaging system, so it makes sense to define it based on who is communicating with whom.
Let's define users to have a relationship if they are:

- direct messaging each other (including groups)
- talking to each other in a channel (using the @user mention)
- or just being part of the same channel and talking in the channel
Now to assess the effectiveness of the API methods. The basic approach would be to retrieve the messages of a public channel with channels.history (or im.history for direct messages, groups.history for private channels, and mpim.history for direct-message channels with multiple participants) for a given time period. In addition, you can retrieve the members of a channel with channels.info (or the equivalents for the other channel types). Then you would parse all retrieved messages and the member list of each channel to identify the relationships and calculate the sociogram.
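A sketch of that retrieval and parsing step in Ruby, calling the Web API methods directly over HTTP. The token and channel ID are placeholders, and counting @-mentions is just one simplistic way to define an edge:

```ruby
require "net/http"
require "json"

TOKEN = ENV.fetch("SLACK_API_TOKEN")

# Call a Slack Web API method; the legacy methods accept the token
# as a query parameter.
def slack_get(method, params = {})
  uri = URI("https://slack.com/api/#{method}")
  uri.query = URI.encode_www_form(params.merge(token: TOKEN))
  JSON.parse(Net::HTTP.get(uri))
end

channel_id = "C12345678"
history = slack_get("channels.history", channel: channel_id, count: 1000)
members = slack_get("channels.info", channel: channel_id)["channel"]["members"]

# Count "A mentioned B" edges for the sociogram: a message by user U
# containing <@U2> is treated as U talking to U2.
edges = Hash.new(0)
history["messages"].each do |msg|
  next unless msg["user"]
  msg["text"].to_s.scan(/<@([A-Z0-9]+)>/).flatten.each do |mentioned|
    edges[[msg["user"], mentioned]] += 1
  end
end
```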
However, Slack will only allow users to access channels that they are members of. That includes access through the API, and it applies even to users with the admin and owner roles.
So it's not possible to see all direct messages, group chats, and private channels of a Slack team through the API, and we would therefore need to limit the approach to public channels and some private channels. Depending on where most of the conversation is happening on a specific Slack team and which private channels our Slack user is a member of, this could significantly limit the ability to calculate a complete sociogram.
In summary, you can use the API methods to calculate a sociogram for your Slack team based on which users are communicating with each other. But that analysis will not be 100% complete, since it's not possible to access all private communication on a Slack team through the API. The calculated sociogram might still be useful, though, if the Slack user doing the calculation has access to all relevant private channels.
I am creating a very basic online store that allows users to buy and customize certain products. I have gotten the payment system working using Stripe, and all that is left to do is provide the seller with a place to view the completed orders (which should contain Shipping Address, order configuration, etc).
I expect that this app will receive very, very low traffic (it's more for fun than anything), so I do not need a super robust admin system. I thought it would actually be sufficient to pass order information to Stripe as metadata and have the seller view the order information on Stripe. However, a potential problem I see is that there might be more data than the metadata limit allows (20 key/value pairs, with a 500-character limit per value). Would it be better to create an admin system on my side (using webhooks to notify the application when the payment has been processed)? Thanks!
Stripe is really only meant to handle the payments part of the equation. The order part is normally handled on top of Stripe (either in your own system or some third party), with that system linking order ids to charge ids.
Having your own order admin page would normally make more sense in the Stripe model, since Stripe only stores the amount charged and not much more.
Also, unless you're doing subscriptions, there's no need to wait for a webhook. The Create Charge API is synchronous, so you'll know instantly whether the payment was processed.
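A minimal sketch of that model in Ruby with the stripe gem. The Order model, its columns, and params like stripe_token are assumptions about your app, not Stripe's API:

```ruby
require "stripe"
Stripe.api_key = ENV.fetch("STRIPE_SECRET_KEY")

# Create the order on your side first...
order = Order.create!(shipping_address: params[:address],
                      configuration:    params[:configuration],
                      status:           "pending")

# ...then charge, storing only the order id as metadata.
charge = Stripe::Charge.create(
  amount:      order.total_cents,
  currency:    "usd",
  source:      params[:stripe_token], # token from Stripe.js / Checkout
  description: "Order ##{order.id}",
  metadata:    { order_id: order.id } # well under the metadata limits
)

# The call is synchronous: no exception means the card was charged.
order.update!(status: "paid", stripe_charge_id: charge.id)
```

This way the full shipping address and order configuration live in your database, and the charge id on the order links the two systems together.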
We just launched and are looking to better understand where the users who convert to registered users are actually coming from. We can see our traffic sources and referrals via Google Analytics and our other web statistics programs, but at volume it's difficult to tie these specifically to the users in our database who have converted, and to where they came from.
We have several "goals" in Google Analytics setup to better help track conversions, but what are others doing to associate user signups with inbound traffic sources?
One thought we've been kicking around: capture the referrer on the first page load, pass it along in the session to the registration form, and store it in the user record.
Any other solutions that are working successfully for you?
Thanks!
Indeed, I would suggest storing the referrer in the user record. Then you can write some code to sensibly draw additional data out of the URL. For instance, you could parse Google URLs to determine the keywords used to discover your site. And your code could detect things like referrals from ad runs, specific SEO campaigns you're running, or partner deals you have going.
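As one example, a small Ruby helper that might pull the keywords out of a stored Google referrer (the q query parameter; note that searches over HTTPS increasingly omit it):

```ruby
require "uri"
require "cgi"

# Returns the search keywords from a Google referrer URL, or nil.
def google_keywords(referrer)
  uri = URI.parse(referrer)
  return nil unless uri.host.to_s.include?("google.")
  CGI.parse(uri.query.to_s)["q"].first
rescue URI::InvalidURIError
  nil
end

google_keywords("http://www.google.com/search?q=widget+store&hl=en")
# => "widget store"
```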
It would be beneficial to spend some time building an admin-only page to visualize these conversions to help you better learn what is working and what isn't. And when things are going well, such a page is encouraging for the whole team!
Capturing the referrer is a good start. You should capture it in a persistent cookie instead of the session, so that if the user returns tomorrow you still have the same referral information.
I've created a gem to automate tracking and saving referral infos. See https://github.com/holli/referer_tracking for more info.
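If you'd rather hand-roll the basics, a minimal Rails sketch of the persistent-cookie capture; the cookie name and the referrer column are assumptions you'd choose yourself:

```ruby
class ApplicationController < ActionController::Base
  before_action :capture_referrer

  private

  # First-touch only: keep whatever referrer the visitor arrived with.
  def capture_referrer
    return if cookies[:first_referrer].present?
    cookies[:first_referrer] = { value: request.referer.to_s,
                                 expires: 1.year.from_now }
  end
end

# Later, when the user signs up:
# user.update!(referrer: cookies[:first_referrer])
```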
Some notes on designing tracking (I've tried to handle these in the gem already):
It might be better to save tracking data to a separate table, so that when you delete a user account you don't delete the information about how that account was created. That way you can still answer questions like "where do bogus user accounts come from?" (one possible table shape is sketched below).
Also save the cookies to the db. If you are using Google Analytics, you can parse Google's cookies to get additional information about the visitor, like the number of visits or campaign information.
It's also good to save user agents etc. to be able to differentiate between mobile and desktop browsers.
In the end, it's good to visualize the information and the conversions. But in the beginning it's hard to know what data you want to visualize and how. So try to capture as much data as possible, and later decide how to crunch that data with scripts.
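To make the separate-table note concrete, one possible shape for such a table as a Rails migration; the table and column names are assumptions, so adapt them to whatever you decide to capture:

```ruby
class CreateVisitorTrackings < ActiveRecord::Migration[5.2]
  def change
    create_table :visitor_trackings do |t|
      t.references :user     # filled in after signup; the row survives user deletion
      t.string :referrer
      t.string :landing_page
      t.string :user_agent   # lets you split mobile vs desktop later
      t.text   :raw_cookies  # e.g. Google Analytics cookies for later parsing
      t.timestamps
    end
  end
end
```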