API to get Orders and Upload Shipment information? - ruby-on-rails

I have tried to find the answer using Google, but I'm a bit confused as there are a number of eBay APIs.
To retrieve orders, should I use the Trading API's GetOrders call?
I would also like to upload shipment information for a completed order via an API, including the tracking number and courier name. Is the CompleteSale call in the Trading API the correct one to use?
And how do I get authentication tokens from an eBay store (my app can connect to many eBay stores)?
I'm planning to use this Rails gem: https://github.com/ReverseRetail/ebay_client

1) Yes, use GetOrders to retrieve orders from eBay (don't use GetSellerTransactions; it is super buggy). Here is the doc for GetOrders Best Practices. You may also want to take a look at this article, Order management using Trading API - GetOrders. (I would set the "Create / Mod TimeTo" time to 5 minutes instead of their recommended 2 minutes.)
2) Yes, you would use CompleteSale. You are going to need three pieces of information: the OrderID, ShipmentTrackingNumber, and ShippingCarrierUsed. Also note that you cannot use the same tracking number for multiple packages; the API will return an error, assuming you are trying to game the system.
3) If you want to perform Trading API requests against a store you do not have access to, you will need to allow clients to authenticate their store with your app. Documentation on that process is very detailed and can be viewed here: Getting Tokens
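Since the linked ebay_client gem wraps these same Trading API calls, here is a minimal, hedged sketch of both requests over plain HTTP (Python with requests; the token, dev/app/cert IDs, order ID, and tracking number are all placeholders):

    import requests
    from datetime import datetime, timedelta, timezone

    # eBay Trading API endpoint (XML over HTTPS). All credentials below
    # are placeholders for your own values.
    ENDPOINT = "https://api.ebay.com/ws/api.dll"

    def trading_headers(call_name):
        return {
            "X-EBAY-API-CALL-NAME": call_name,
            "X-EBAY-API-SITEID": "0",  # 0 = US site
            "X-EBAY-API-COMPATIBILITY-LEVEL": "967",
            "X-EBAY-API-DEV-NAME": "YOUR_DEV_ID",
            "X-EBAY-API-APP-NAME": "YOUR_APP_ID",
            "X-EBAY-API-CERT-NAME": "YOUR_CERT_ID",
            "Content-Type": "text/xml",
        }

    # 1) GetOrders: orders modified in a sliding window, with ModTimeTo
    # backed off 5 minutes as suggested above.
    now = datetime.now(timezone.utc)
    fmt = "%Y-%m-%dT%H:%M:%S.000Z"
    get_orders = f"""<?xml version="1.0" encoding="utf-8"?>
    <GetOrdersRequest xmlns="urn:ebay:apis:eBLBaseComponents">
      <RequesterCredentials><eBayAuthToken>YOUR_TOKEN</eBayAuthToken></RequesterCredentials>
      <ModTimeFrom>{(now - timedelta(minutes=20)).strftime(fmt)}</ModTimeFrom>
      <ModTimeTo>{(now - timedelta(minutes=5)).strftime(fmt)}</ModTimeTo>
      <OrderRole>Seller</OrderRole>
    </GetOrdersRequest>"""
    print(requests.post(ENDPOINT, data=get_orders, headers=trading_headers("GetOrders")).text)

    # 2) CompleteSale: attach the tracking number and carrier to an order.
    complete_sale = """<?xml version="1.0" encoding="utf-8"?>
    <CompleteSaleRequest xmlns="urn:ebay:apis:eBLBaseComponents">
      <RequesterCredentials><eBayAuthToken>YOUR_TOKEN</eBayAuthToken></RequesterCredentials>
      <OrderID>110012345678-123456789012</OrderID>
      <Shipment>
        <ShipmentTrackingDetails>
          <ShipmentTrackingNumber>1Z999AA10123456784</ShipmentTrackingNumber>
          <ShippingCarrierUsed>UPS</ShippingCarrierUsed>
        </ShipmentTrackingDetails>
      </Shipment>
      <Shipped>true</Shipped>
    </CompleteSaleRequest>"""
    print(requests.post(ENDPOINT, data=complete_sale, headers=trading_headers("CompleteSale")).text)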

Related

How to use Priority API?

I'm trying to create a simple application that reminds me two weeks before I am due to receive a delivery. My data is stored in a Priority database and I'm looking for a way to read it in code (preferably Python).
I read about the Priority REST API and tested it with the examples on the site (https://prioritysoftware.github.io/restapi/request/). It seems like this is the way to do it, but it requires a URL for the Priority account, and I don't know what my URL is because I use the desktop app.
So I have 2 questions:
Is using the API the best way?
How do I find the URL for my account?
In addition, I would welcome any further help with my idea of a program that reminds me two weeks before receiving a delivery (examples, tips, ways to implement it, and so on).
Thanks in advance
In order to use the Priority REST API, you need to install its application server.
More information can be found here.
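Once the application server is up, requests are standard OData queries over HTTPS. A minimal sketch (the server URL, company name, and the ORDERS entity and field names are assumptions based on the public demo in the docs linked in the question; replace them with your installation's values):

    import requests
    from requests.auth import HTTPBasicAuth

    # URL shape per the Priority REST API docs:
    #   https://{server}/odata/Priority/tabula.ini/{company}/{entity}
    BASE = "https://your-priority-server/odata/Priority/tabula.ini/yourcompany"

    resp = requests.get(
        f"{BASE}/ORDERS",
        params={
            "$select": "ORDNAME,CUSTNAME,CURDATE",  # order number, customer, date
            "$top": "10",
        },
        auth=HTTPBasicAuth("username", "password"),
    )
    resp.raise_for_status()
    for order in resp.json().get("value", []):
        print(order["ORDNAME"], order["CURDATE"])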
If you are working locally, you can also (carefully) query your SQL server directly and look for your data.
Priority also has built-in functionality to send email reminders (BPM rules).

Google ads api list all accounts and their campaigns

I am totally new to Google Ads. I have a Google Ads account set up with a customer ID (which I believe is the parent account ID), and under it a lot of 'Accounts' (url: ads.google.com/aw/accounts). Every account has a list of campaigns. I want to prepare a report that fetches all campaigns and their settings. I am using Postman to hit the Google APIs (googleads.googleapis.com/v8).
I want to know which APIs I can use to list all customer accounts and their campaigns.
Using the REST API for this sort of operation is not advised: it's expensive and there's an extremely tight quota.
It's advised to use the gRPC client libraries instead, as they use search operations. You can learn more here: https://developers.google.com/google-ads/api/docs/start
Each of the client libraries has a robust set of examples and sample code demonstrating exactly what you're trying to achieve (called list_accessible_accounts or similar). If you let me know what language you're using, I can point you to the correct one.
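For example, with the official Python client the flow looks roughly like this (a sketch; it assumes a configured google-ads.yaml with your developer token and OAuth credentials):

    from google.ads.googleads.client import GoogleAdsClient

    client = GoogleAdsClient.load_from_storage("google-ads.yaml")

    # 1) List every customer account these credentials can access directly.
    customer_service = client.get_service("CustomerService")
    accessible = customer_service.list_accessible_customers()

    # 2) For each account, stream its campaigns with a GAQL search.
    query = """
        SELECT campaign.id, campaign.name, campaign.status
        FROM campaign
        ORDER BY campaign.id"""

    googleads_service = client.get_service("GoogleAdsService")
    for resource_name in accessible.resource_names:
        customer_id = resource_name.split("/")[-1]  # "customers/123" -> "123"
        for batch in googleads_service.search_stream(customer_id=customer_id, query=query):
            for row in batch.results:
                print(customer_id, row.campaign.id, row.campaign.name)

Note that list_accessible_customers only returns accounts your login can access directly; to enumerate every child account under a manager account, you would query the customer_client resource instead.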

App Store price drops and updates info

I found a couple of websites which somehow collect App store price changes, updates, etc.
My question is - where do websites like:
148apps.com
appdropp.com
etc.
get their info? I signed up as an Apple affiliate and found that I can request app info using the search or lookup APIs. But I can't send thousands of requests to check every app day after day; it seems to be a huge task.
Is there any other available option to get this info?
They may be using the Enterprise Partner Feed.
It looks like it would provide what you're looking for.
They are using the EPF feed provided by Apple.
You can check the accompanying Python tool for importing the feed into a database.
You can also take a look at websites like:
http://www.apptweak.com or http://www.appsocean.com
If you know PHP (for example), you can write a simple spider to grab price drops and updates.
Just read up on PHP's cURL library.
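If the full EPF feed is overkill, note that the lookup API mentioned in the question accepts comma-separated batches of IDs, which cuts the request count considerably. A rough sketch (the app IDs are placeholders, and the price cache would need to be persisted between runs):

    import requests

    # Watch list of iTunes track IDs (placeholders).
    watch_list = ["284882215", "389801252", "310633997"]
    known_prices = {}  # app id -> last seen price; load/save this between runs

    resp = requests.get(
        "https://itunes.apple.com/lookup",
        params={"id": ",".join(watch_list), "country": "us"},
    )
    resp.raise_for_status()

    for app in resp.json().get("results", []):
        app_id = str(app["trackId"])
        price = app.get("price")
        if app_id in known_prices and price < known_prices[app_id]:
            print(f"Price drop: {app['trackName']} {known_prices[app_id]} -> {price}")
        known_prices[app_id] = price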

Some general Twitter4J questions

I'm trying to do a write-up of Twitter4J for part of a uni project, but I'm getting hung up on a few things. From the Twitter4J API:
void sample()
Starts listening on random sample of all public
statuses. The default access level provides a small proportion of the
Firehose. The "Gardenhose" access level provides a proportion more
suitable for data mining and research applications that desire a
larger proportion to be statistically significant sample.
This implies that "default access" is provided to the stream by default, but another type of access, "Gardenhose" access, is available. Is this correct? And if so, how do you obtain the higher Gardenhose access?
I'm asking as I've seen some answers on SO suggest that there is only one level of access - the Gardenhose, and I'm trying to clear this up once and for all.
In addition to this, I would like a reference (if possible) to the number of tweets the sample stream allows access to. I've read lots of people cite 1% for "default access" and 10% for "gardenhose access" - but I can't find this anywhere in the API.
So to sum up, two questions:
Does the sample stream have a "default access" and a "gardenhose access", or just one of those?
How much of the Twitter firehose stream can these levels of access gain?
If replying, please have links to reference-able API where possible.
The gardenhose is different from the default sample stream, you would have had to request access from Twitter in order to use it.
However, I am not sure if Twitter still allows access to the gardenhose, or even if it still exists. It seems the current mechanism may be to use one of Twitter's preferred data partners:
Using the Streaming API?
Every Twitter account can connect to a small sampling of the Streaming API. Accounts that need increased access for data gathering or analytical reasons should check out our preferred partners page.
(source)
It may be different for students or educational institutions, and the gardenhose may still be available to you. Previously you would either e-mail api-research@twitter.com or use the following form, but I have no idea if these methods still work - the post is quite old.
As for the percentage of Tweets that the default sample stream allows access to, the best reference I could find was a comment made by a Twitter employee on the developer forums - emphasis mine:
I would recommend just using the 1% sample stream from https://stream.twitter.com/1/statuses/sample.json that you can connect to with your Twitter account. It's unlikely that you'll be in a situation where you can access all of the data and will have to make do with a sample. At about 230 million tweets a day, you'd still be theoretically getting 2.3 million tweets a day.
(source)
Although, again this is an old post.
Regarding the firehose stream, as specified by the documentation, you need to be granted permission to access it; I believe very few people have full access to this stream:
GET statuses/firehose
This endpoint requires special permission to access.
Returns all public statuses. Few applications require this level of access. Creative use of a combination of other resources and various access levels can satisfy nearly every application use case.
Overall, documentation is scarce on the different access levels and what they offer, so I suggest contacting Twitter directly to discuss your requirements or contacting one of their data partners.
Apologies if this wasn't as concrete as you would have liked, good luck with your research.

How to query a Twitter timeline in parallel?

I am building a Twitter app and will be pulling a large amount of data from the user's timeline. For speed, I need to query the timeline in parallel. My aim is to pull 1000 of the user's tweets from the API, but the Twitter API caps the number of tweets per request at 200. Pagination works by specifying the last (oldest) tweet's ID from the previous request, so I need the result of the previous API call before I can make the next one. This method is not parallelizable, as the sketch below shows. Is there an alternative way to get the user timeline from the Twitter API that allows parallel requests? (There is the page parameter, but it is deprecated and will become nonfunctional in the near future.)
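For reference, here is the sequential walk described above, sketched with tweepy (an assumed client choice; the credentials and screen name are placeholders):

    import tweepy

    # Placeholder credentials; the point is the control flow, not the auth.
    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
    api = tweepy.API(auth)

    tweets, max_id = [], None
    while len(tweets) < 1000:
        # Each call needs max_id from the previous page, so the requests
        # form a chain and cannot be issued in parallel.
        page = api.user_timeline(screen_name="someuser", count=200, max_id=max_id)
        if not page:
            break
        tweets.extend(page)
        max_id = page[-1].id - 1  # oldest tweet in this page, minus one

    print(f"Fetched {len(tweets)} tweets")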
What you have to remember, is that Twitter have a difficult relationship with external developers. Using their API for anything interesting like this is simply not allowed by them.
What you need is access to the Firehose.
However, even if you're willing to pay a million dollars a year - Twitter aren't interested.
You could try getting it from a third party like Gnip but - again - likely to be expensive.
So, essentially, you can't. Twitter just aren't interested in amateur developers doing anything innovative with their platform. Sorry.
