FedEx Track Web Service isn't recognizing any tracking numbers

I'm trying to use the FedEx API to track packages. I can authenticate to their test server successfully (using my user credentials, account number, and meter number). However, I receive the same unhelpful response for most tracking numbers that I use in my requests; both test tracking numbers (like 999999999999) and real tracking numbers (that work well on the FedEx website) return the following:
Error Code 9040.
No information for the following shipments has been received by our system yet. Please try again or contact Customer Service at 1.800.Go.FedEx(R) 800.463.3339.
The only requests that fetch a different response are the clearly invalid ones, like "test", which returns:
Error Code 5508.
Invalid tracking number.
I tried SOAP requests using their WSDL (TrackService_v5) as well as manual non-SOAP HTTP POST requests, but the responses are exactly the same in both cases. Is something wrong on their side, or am I doing something wrong?
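For reference, the manual (non-SOAP-library) request I'm posting looks roughly like the sketch below. The credentials, account and meter numbers are placeholders, and the envelope follows the TrackService_v5 WSDL as I read it, so the element names and the test endpoint are best double-checked against the WSDL itself:

import requests

# Rough sketch of the manual HTTP POST variant; endpoint is the FedEx test
# environment and the envelope element names follow the v5 WSDL as I
# understand it. Credentials, account and meter numbers are placeholders.
ENVELOPE = """<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:v5="http://fedex.com/ws/track/v5">
  <soapenv:Body>
    <v5:TrackRequest>
      <v5:WebAuthenticationDetail>
        <v5:UserCredential>
          <v5:Key>YOUR_KEY</v5:Key>
          <v5:Password>YOUR_PASSWORD</v5:Password>
        </v5:UserCredential>
      </v5:WebAuthenticationDetail>
      <v5:ClientDetail>
        <v5:AccountNumber>YOUR_ACCOUNT</v5:AccountNumber>
        <v5:MeterNumber>YOUR_METER</v5:MeterNumber>
      </v5:ClientDetail>
      <v5:Version>
        <v5:ServiceId>trck</v5:ServiceId>
        <v5:Major>5</v5:Major>
        <v5:Intermediate>0</v5:Intermediate>
        <v5:Minor>0</v5:Minor>
      </v5:Version>
      <v5:PackageIdentifier>
        <v5:Value>999999999999</v5:Value>
        <v5:Type>TRACKING_NUMBER_OR_DOORTAG</v5:Type>
      </v5:PackageIdentifier>
    </v5:TrackRequest>
  </soapenv:Body>
</soapenv:Envelope>"""

response = requests.post("https://wsbeta.fedex.com:443/web-services",
                         data=ENVELOPE,
                         headers={"Content-Type": "text/xml"})
print(response.status_code)
print(response.text)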

It seems that FedEx has disabled the test tracking numbers; in the past 999999999999 would work just fine, but now even that doesn't work. To the best of my knowledge, the only way to resolve this is to move to production, which IMHO is bad because you can't test the tracking part of your application until you move to production.

999999999999 worked for me, but I think I am already in the production environment.

Related

Is there any way to not get our app blocked by Twitter?

I have coded a bot which tweets every 150 seconds (time.sleep(150)). I have made an app on Twitter with read/write permissions, but after 30 tweets Twitter blocks the application. Is there any way around this? Has anyone else tried running bots on Twitter? There are some grammar bots and RT bots on Twitter that have almost 110k tweets and tweet every 30 seconds. How do they get past the rate limit protection?
The specific error is "Restricted from performing write actions", and then the code stops.
The docs don't specify the rate limit rules, only that there is one. They do state that you cannot post duplicate texts; could that be the case here? What HTTP error are you receiving? Since they don't explicitly post the rules that apply to the rate limit, I'd suppose they have internal algorithms to tell whether it's bot-like behaviour, which I'm sure they do not want to allow. Especially if you set up a new app, the rules might be stricter.
Edit:
If you're completely certain that you're not firing off too many requests at once (e.g. check with Fiddler to make sure), then Twitter suggests getting in touch and checking the email address of the associated account for any mail from their operations team in order to resolve possible misinterpretations.
This also might be useful: API developers: abuse prevention and security
I faced the exact same problem with my Twitter bot. I basically made a bot to auto-reply to a specific user and ran that script on a one-minute cron. It got blocked even though my replies weren't spam, just AI-generated replies from https://qnamaker.ai/.
My script is a bit heavy, so it is not feasible for me to keep it running 24/7 as it would create overhead on the server. But since you keep yours running (I picked that up from the delay function), one thing you can do is keep changing the interval at which your bot tweets. Make it random so that it doesn't look scheduled and bot-like.
You can use the random module in Python for that:
import random
import time
# sleep for a random interval between 100 and 150 seconds
y = random.randint(100, 150)
time.sleep(y)
This way your code resumes after random intervals of time. This prevented my bot from getting blocked for a long time. Hope this helps.
Your bot seems to be banned; their algorithms probably flagged your posting as similar to that of spam accounts.
You should contact Twitter and explain the situation, using the following page:
Twitter API Policy Support, and give them the details of why they should allow your bot to act as you wish.

Different error responses when using the JIRA REST API in two instances

We have two JIRA installations at our company: one that we use for our projects and a second one for testing purposes.
I'm working on a project that needs to use the JIRA REST API. For this purpose I'm connecting to our testing instance.
The problem is that while trying out the REST API, I keep getting 400 errors without a single explanation of what went wrong. I just get an HTML page with
Your browser sent a request that this server could not understand
I was a bit desperate and decided to try it against our real JIRA. To my surprise, the same request gave me a different response:
{"errorMessages":[],"errors":{"project":"project is required"}}
In this case, I do get a meaningful error!
I replicated this easily: I never get a meaningful error from the test instance, but the real one always gives me one.
I cannot keep trying things out in our production JIRA, but I cannot easily continue working without getting meaningful errors. So, what could be wrong in the testing instance? I could not find any configuration for the 'verbosity' of the API responses.
I believe that this error is returned not by JIRA itself but rather by a proxy web server that is part of that instance's configuration.
I suggest you compare the HTTP headers sent with working requests from your browser against the headers you pass via curl. Googling for "Your browser sent a request that this server could not understand" helps too.
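If it helps, one quick way to see exactly what headers and body you are sending is to make the same call from a small script and dump the prepared request. A minimal Python sketch with the requests library; the URL, credentials, and field values below are just placeholders:

import requests

# Placeholder URL, credentials and fields; the point is only to capture the
# exact headers and body that get sent, for comparison with a request that
# works from the browser or against the production instance.
resp = requests.post(
    "https://jira-test.example.com/rest/api/2/issue",
    auth=("api-user", "api-password"),
    headers={"Content-Type": "application/json"},
    json={"fields": {"project": {"key": "TEST"},
                     "summary": "REST API smoke test",
                     "issuetype": {"name": "Task"}}},
)

# The headers that were actually sent (Content-Type, Content-Length, etc.)
print(resp.request.headers)
print(resp.status_code, resp.text[:500])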

Getting random 404 errors using Valence

When I make API calls to the server, I'm getting 404 errors for various data -- grades, role IDs, terms -- that I won't get on the next time I call it. The data's there on the server, viewable by the same user, and is often returned successfully, but not every time. The same user context will return data successfully for other calls.
Any ideas what could be causing this?
I'm using the Valence API with the Python client library and our 9.4.1 SP18 instance of Desire2Learn in a non-interactive script.
More detail: the text returned on the bad 404s is "Error: The system cannot find the path specified."
It would help enormously to gather data about your case: packet traces that show successful calls from your client alongside unsuccessful calls would be particularly useful to see. If you are quite certain (and from your description I see no reason you shouldn't be) that you're forming the calls in the right way each time you make them, then the kind of behaviour you're noticing would seem to point to some wider network or configuration issue: sometimes your calls are properly getting through the web service layer, and sometimes they are not. That suggests the problem is not in the way you're using the API but in the way the service is able to receive the request.
I would encourage you, especially if you can gather data to provide showing this behaviour, to open a support incident with Desire2Learn's help desk in conjunction with your Approved Support Contact, or your Partner Manager (depending on whether you're a D2L client or a D2L partner).
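As a starting point for that data gathering, even a small wrapper that records every call and its response can be enough to line up the failing requests against the successful ones. A minimal Python sketch using plain requests; the host and route are placeholders, and in practice the URL would be the signed one produced by the client library's user context:

import logging
import requests

logging.basicConfig(filename="valence_calls.log", level=logging.INFO)

def logged_get(url, **kwargs):
    # Record exactly what was requested and what came back, so the
    # intermittent 404s can be compared against successful calls.
    resp = requests.get(url, **kwargs)
    logging.info("GET %s -> %s %r", resp.url, resp.status_code, resp.text[:200])
    return resp

# Placeholder call; substitute the signed URL your user context generates.
logged_get("https://lms.example.com/d2l/api/lp/1.0/users/whoami")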

Implementing a 2 Legged OAuth Provider

I'm trying to find my way around the OAuth spec, its requirements and any implementations I can find and, so far, it really seems like more trouble than it's worth because I'm having trouble finding a single resource that pulls it all together. Or maybe it's just that I'm looking for something more specialized than most tutorials.
I have a set of existing APIs--some in Java, some in PHP--that I now need to secure and, for a number of reasons, OAuth seems like the right way to go. Unfortunately, my inability to track down the right resources to help me get a provider up and running is challenging that theory. Since most of this will be system-to-system API usage, I'll need to implement a 2-legged provider. With that in mind...
Does anyone know of any good tutorials for implementing a 2-legged OAuth provider with PHP?
Given that I have securable APIs in 2 languages, do I need to implement a provider in both or is there a way to create the provider as a "front controller" that I can funnel all requests through?
When securing PHP services, for example, do I have to secure each API individually by including the requisite provider resources on each?
Thanks for your help.
Rob, not sure where you landed on this but wanted to add my 2 cents in case anyone else ran across this question.
I more or less had the same question a few months ago, after hearing about "OAuth" for the better part of a year. I was developing a REST API I needed to secure, so I started reading about OAuth... and then my eyes started to roll back in my head.
I probably gave it a good solid day or 2 of skimming and reading until I decided, much like you, that OAuth was confusing garbage and just gave up on it.
So then I started researching ways to secure APIs in general and started to get a better grasp of how to do that. The most popular way seemed to be sending requests to the API along with a checksum of the entire message (encoded with a secret that only you and the server know) that the server can use to decide if the message has been tampered with on its way from the client, like so:
Client sends /user.json/123?showFriends=true&showStats=true&checksum=kjDSiuas98SD987ad
Server gets all that, looks up user "123" in the database, loads his secret key and then (using the same method the client used) re-calculates its OWN checksum given the request arguments.
If the server's generated checksum and the client's sent checksum match up, the request is OK and executed; if not, it is considered tampered with and rejected.
The checksum is called an HMAC and if you want a good example of this, it is what Amazon Web Services uses (they call the argument 'signature' not 'checksum' though).
So given that one of the key requirements for this to work is that the client and server have to generate the HMAC in the same fashion (otherwise they won't match), there have to be rules on HOW to combine all the arguments... and then I suddenly understood all that "natural byte-ordering of parameters" stuff from OAuth... it was just defining the rules for how to generate the signature, because it needed to.
Another point is that every param you include in the HMAC generation is a value that then can't be tampered with when you send the request.
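To make that concrete, here's a minimal sketch of the sign-and-verify flow in Python. The way the arguments are combined (sorted key=value pairs joined with '&') is just one arbitrary-but-consistent choice, and all the names are illustrative:

import hashlib
import hmac

SECRET_KEY = b"only-the-client-and-server-know-this"  # illustrative secret

def canonical_string(path, params):
    # Both sides must combine the arguments the same way, so pick a rule
    # (here: sorted key=value pairs joined with '&') and stick to it.
    query = "&".join(f"{k}={params[k]}" for k in sorted(params))
    return f"{path}?{query}"

def sign(path, params):
    msg = canonical_string(path, params).encode("utf-8")
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()

# Client side: compute the checksum and append it to the request.
params = {"showFriends": "true", "showStats": "true"}
checksum = sign("/user.json/123", params)

# Server side: recompute the checksum from the received arguments and
# compare; any tampering with the path or params changes the HMAC.
assert hmac.compare_digest(checksum, sign("/user.json/123", params))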
So if you just encode the URI stem as the signature, for example:
/user.json == askJdla9/kjdas+Askj2l8add
then the only thing in your message that cannot be tampered with is the URI; all of the arguments can be tampered with because they aren't part of the "checksum" value that the server will re-calculate.
Alternatively, even if you include EVERY param in the calculation, you still run the risk of "replay attacks", where a malicious middleman or eavesdropper can intercept an API call and just keep resending it to the server over and over again.
You can fix that by adding a timestamp (always use UTC) in the HMAC calculation as well.
REMINDER: Since the server needs to calculate the same HMAC, you have to send along any value you use in the calculation EXCEPT YOUR SECRET KEY (OAuth calls it a consumer_secret, I think). So if you add a timestamp, make sure you send a timestamp param along with your request.
If you want to make the API secure against replay attacks, you can use a nonce value (a one-time-use value the server generates and gives to the client; the client uses it in the HMAC and sends back the request, and the server confirms it, marks that nonce value as "used" in the DB, and never lets another request use it again).
NOTE: nonces are a really exact way to solve the "replay attack" problem -- timestamps are great, but because computers don't always have in-sync clocks, you have to allow an acceptable window on the server side for how "old" a request might be (say 10 mins, 30 mins, 1 hr... Amazon uses 15 mins) before you accept or reject it. In this scenario your API is technically vulnerable during that entire window.
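The server-side freshness check itself is tiny; a sketch, with the window size picked arbitrarily:

import time

ALLOWED_WINDOW_SECONDS = 15 * 60  # e.g. an Amazon-style 15 minute window

def is_fresh(request_timestamp):
    # The timestamp was part of the HMAC, so it can't be altered without
    # breaking the signature; the server only has to reject requests whose
    # UTC timestamp falls outside the allowed window.
    return abs(time.time() - float(request_timestamp)) <= ALLOWED_WINDOW_SECONDS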
I think nonce values are great, but they should only be needed in APIs where it is critical that they keep their integrity. In my API, I didn't need it, but it would be trivial to add later if users demanded it... I would literally just need to add a "nonce" table in my DB, expose a new API to clients like:
/nonce.json
and then when they send that back to me in the HMAC calculation, I would need to check the DB to make sure it had never been used before; once used, I would mark it as such in the DB so that if a request EVER came in again with that same nonce, I would reject it.
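In code, that check is just a lookup-and-mark step; a sketch with in-memory sets standing in for the "nonce" table:

import secrets

issued_nonces = set()   # stand-in for the "nonce" table
used_nonces = set()

def new_nonce():
    # Handed out by the hypothetical /nonce.json endpoint.
    nonce = secrets.token_hex(16)
    issued_nonces.add(nonce)
    return nonce

def consume_nonce(nonce):
    # Accept a nonce only if we issued it and it has never been used;
    # mark it as used so a replayed request is rejected.
    if nonce not in issued_nonces or nonce in used_nonces:
        return False
    used_nonces.add(nonce)
    return True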
Summary
Anyway, to make a long story short, everything I just described is basically what is known as "2-legged OAuth". There isn't that added step of flowing to the authority (Twitter, Facebook, Google, whatever) to authorize the client; that step is removed, and instead the server implicitly trusts the client IF the HMACs they are sending match up. That means the client has the right secret_key and is signing its messages with it, so the server trusts it.
If you start looking around online, this seems to be the preferred method for securing API methods nowadays, or something like it. Amazon uses almost exactly this method, except they use a slightly different way of combining the parameters before signing the whole thing to generate the HMAC.
If you are interested I wrote up this entire journey and thought-process as I was learning it. That might help provide a guided thinking tour of this process.
I would take a step back and think about what a properly authenticated client is going to be sending you.
Can you store the keys and credentials in a common database which is accessible from both sets of services, and just implement the OAuth provider in one language? When the user sends in a request to a service (PHP or Java) you then check against the common store. When the user is setting up the OAuth client then you do all of that through either a PHP or Java app (your preference), and store the credentials in the common DB.
There are some OAuth providers written in other languages that you might want to take a look at:
PHP - http://term.ie/oauth/example/ (see bottom of page)
Ruby - http://github.com/mojodna/sample-oauth-provider
.NET - http://blog.bittercoder.com/PermaLink,guid,0d080a15-b412-48cf-b0d4-e842b25e3813.aspx

Is it okay for my online store to send order confirmation emails via Gmail synchronously?

When a user completes an order at my online store, he gets an email confirmation.
Currently we're sending this email via Gmail (which we chose over sendmail for greater portability) after we authorize the user's credit card and before we show him a confirmation message (i.e., synchronously).
It's working fine in development, but I'm wondering if this will cause a problem in production. Will it require making the user wait too long? Will many simultaneous Gmail connections get us in trouble? Any other general caveats?
If sending the emails synchronously will be a problem, could someone recommend an asynchronous solution? (Is ar_mailer any good?)
The main issue I can think of is that Gmail limits the amount of email you can send daily, so if you get too many orders a day it might break.
As they say:
"In an effort to fight spam and prevent abuse, Google will temporarily disable your account if you send a message to more than 500 recipients or if you send a large number of undeliverable messages. If you use a POP or IMAP client (Microsoft Outlook or Apple Mail, e.g.), you may only send a message to 100 people at a time. Your account should be re-enabled within 24 hours."
http://mail.google.com/support/bin/answer.py?hl=en&answer=22839
I would recommend using sendmail on your server in order to have greater control over what's going on and not depend on another service, especially since sendmail is not really complicated to set up.
The internet is not as resilient as some people would have you believe; the link between you and Gmail will break at some point, or Gmail will go offline, causing the user to think that they have not paid successfully.
I would put some other queue in place; sendmail sounds acceptable, and you can't build your site now around where it 'might' be hosted in the future.
Ryan
If the server waits for the email to be sent before giving the user any feedback, then if there were problems connecting to the mail server (timeouts, server down, etc.) the user's request would time out too and he wouldn't be told anything about the status of his order, so I believe you should really do this asynchronously.
Also, you should check whether doing this is even allowed by Gmail's TOS. If it isn't, you might check whether it's allowed if you purchase one of their subscriptions. There's also surely a limit to the number of outgoing emails you may send within a given timeframe, so if you're expecting your online store to be successful, you may hit that limit and bump into some nasty issues. If you're not self-hosting the site, you should check whether your host offers email servers (several plans include them for free), as then using your host's mail servers would be the most obvious choice.
FACT: Gmail crashes. Not often, but it happens, and you can't control it or test it.
The simplest quick fix is to start a separate thread or fork a subprocess to send the email. Yes, problems will likely arise from using Gmail, and I really have no input on that vs. the alternatives. But from a design perspective, there's just no reason to make the user wait for that process to complete.
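The question is about a Ruby app, but the shape of the fix is the same in any language. A minimal Python sketch of handing the send off to a background thread so the checkout response never blocks on Gmail; the addresses and credentials are placeholders:

import smtplib
import threading
from email.message import EmailMessage

def send_confirmation(to_address, order_id):
    # Runs off the request thread, so a slow or failed SMTP connection
    # never delays the user's confirmation page.
    msg = EmailMessage()
    msg["Subject"] = f"Order {order_id} confirmation"
    msg["From"] = "orders@example.com"          # placeholder sender
    msg["To"] = to_address
    msg.set_content("Thanks for your order!")
    with smtplib.SMTP("smtp.gmail.com", 587) as smtp:
        smtp.starttls()
        smtp.login("orders@example.com", "app-password")  # placeholder creds
        smtp.send_message(msg)

# Fire and forget from the checkout handler; log failures inside the worker.
threading.Thread(target=send_confirmation,
                 args=("customer@example.com", 12345),
                 daemon=True).start()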
From a testing perspective, this might be where a proxy pattern might come in handy. It might be easy for you to directly invoke Gmail to send a message. Make it harder. Put in a proxy object that does the mailing for you that you can turn off (because heaven knows you can't for testing purposes make Gmail crash). Just make your team follow what happens in the event of an email malfunction by turning off the proxy and trying to complete an order. If you are doing it synchronously, then all the plagues mentioned here by other posters will rear their heads. If you are doing it asynchronously, you should be able to allow it to fail silently (from the user's perspective--from your perspective there should be enormous logging statements and text messages in the middle of the night and possibly a mild electric current arcing across the surface of someone's skin).
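A toy version of that proxy, assuming a send_confirmation-style function like the one sketched above; tests flip the switch and watch what the order flow does when mail fails:

class MailerProxy:
    """Stands between the app and the real mail-sending code so that
    failures can be simulated without needing Gmail itself to break."""

    def __init__(self, real_mailer, enabled=True):
        self.real_mailer = real_mailer   # e.g. send_confirmation above
        self.enabled = enabled

    def send(self, *args, **kwargs):
        if not self.enabled:
            # Simulated outage: log loudly, but don't block the order.
            print("MAIL FAILED (simulated); order continues, ops alerted")
            return False
        self.real_mailer(*args, **kwargs)
        return True

# In tests: mailer = MailerProxy(send_confirmation, enabled=False)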
