Different error responses when using the JIRA REST API in two instances

We have two JIRA installations at our company: one that we use for our projects and a second one for testing purposes.
I'm working on a project that needs to use the JIRA REST API. For this purpose I'm connecting to our testing instance.
The problem is that while trying out the REST API, I keep getting 400 errors without any explanation of what went wrong. I just get an HTML page containing:
Your browser sent a request that this server could not understand
Getting a bit desperate, I decided to try the same request against our real JIRA. To my surprise, it gave me a different response:
{"errorMessages":[],"errors":{"project":"project is required"}}
In this case, I do get a meaningful error!
I can replicate this easily: I never get a meaningful error from the test instance, but the real one always gives me one.
I cannot keep trying things out in our production JIRA, but I cannot easily continue working without meaningful errors either. So, what could be wrong in the testing instance? I could not find any configuration setting for the 'verbosity' of the API responses.

I believe that this error is returned not by JIRA but rather by a proxy web server that is part of your test instance's configuration.
I suggest comparing the HTTP headers sent with working requests from your browser against the headers you pass via curl. Googling for "Your browser sent a request that this server could not understand" helps too.
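For example, a minimal request sketch in Python (the URL, credentials, and issue fields are placeholders; note that requests sets an explicit Content-Type: application/json header here, whose absence is one classic cause of Apache-style 400 pages like the one quoted above):

import requests

# Placeholder URL and credentials: adjust to your test instance.
url = "https://jira-test.example.com/rest/api/2/issue"
payload = {
    "fields": {
        "project": {"key": "TEST"},
        "summary": "REST API smoke test",
        "issuetype": {"name": "Task"},
    }
}

# json= sends the body as JSON and sets Content-Type: application/json.
# Print everything that comes back so proxy-generated HTML is visible too.
resp = requests.post(url, json=payload, auth=("user", "password"))
print(resp.status_code)
print(resp.headers.get("Content-Type"))
print(resp.text)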

Related

Handling webhook calls in ASP.NET MVC application, implementation and testing

I have inherited an old ASP.NET MVC application and now I have to integrate a new Payment Service Provider. This PSP returns the responses to all payment requests via a POST message, with the fields of the response sent as hidden fields. The URL of the webhook must be provided in the request. After reading different SO posts along the same lines, I know in principle what I would have to do, and the PSP has a test and integration environment, so I can send test requests from my development environment.
The problem I am facing is how to read the response. The URL for the webhook must be public, so running my development environment on localhost won't help. I can use webhook.site to get the responses, but I would like to get them in my application, so I can check that my code behaves correctly and handles the POST message containing all the hidden fields. Did anyone manage to find a solution for this problem?
TIA,
Ed
You mention that you want to test whether your code behaves correctly, and I see the asp.net-mvc tag on the post. For this requirement I suggest going with Azure: its remote debugging feature can easily help you test your code.
If you don't want to go with Azure, you can use ngrok to make your localhost URL public. This way you can trigger your code and debug it locally, much like remote debugging in Azure.
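For example, assuming the application runs locally on port 5000 (adjust to your setup), a single command gives you a public URL that tunnels to it:

ngrok http 5000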
There is one more way to test this well: just log the whole body of the webhook (to a log file or database) and later replay it yourself through Postman (see the sketch below). The drawback is that this doesn't let your webhook endpoint learn the result in real time.
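For illustration, a minimal log-and-replay receiver might look like this (a Python standard-library sketch; the port and file name are arbitrary, and an ASP.NET MVC implementation would do the same inside the controller action):

from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookLogger(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the raw POST body and append it to a file so the exact
        # payload can be replayed later through Postman.
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        with open("webhook.log", "ab") as log:
            log.write(body + b"\n")
        # Acknowledge receipt so the PSP does not retry.
        self.send_response(200)
        self.end_headers()

HTTPServer(("", 8080), WebhookLogger).serve_forever()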

Mock API Requests Xcode 7 Swift Automated UI Testing

Is there a way to mock requests when writing automated UI tests in Swift 2.0? As far as I am aware, the UI tests should be independent of other functionality. Is there a way to mock the response from server requests in order to test the behaviour of the UI dependent on the response? For example, if the server is down, the UI tests should still run. Quick example: for login, if the password check fails, the UI should show an alert; if the login is successful, the next page should be shown.
In its current implementation, this is not directly possible with UI Testing. The only interface the framework has directly to the code is through its launch arguments/environment.
You can have the app look for a specific key or value in this context and switch up some functionality. For example, if the MOCK_REQUESTS key is set, inject a MockableHTTPClient instead of the real HTTPClient in your networking layer. I wrote about setting the parameters and NSHipster has an article on how to read them.
While not ideal, it is technically possible to accomplish what you are looking for with some legwork.
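As a sketch of that pattern (written in Python for brevity; MockableHTTPClient, HTTPClient, and the MOCK_REQUESTS key are the hypothetical names used above, and the real wiring in Swift is covered by the linked articles):

import os
import urllib.request

class HTTPClient:
    # Real networking layer (hypothetical).
    def get(self, url):
        return urllib.request.urlopen(url).read()

class MockableHTTPClient:
    # Canned responses for UI tests (hypothetical).
    def __init__(self, responses):
        self.responses = responses

    def get(self, url):
        return self.responses[url]

# At app startup: the UI test runner sets MOCK_REQUESTS in the launch
# environment, and the app swaps in the mock before any request is made.
if os.environ.get("MOCK_REQUESTS"):
    client = MockableHTTPClient({"https://example.com/login": b'{"ok": true}'})
else:
    client = HTTPClient()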
Here's a tutorial on stubbing network data for UI Testing I put together. It walks you through all of the steps you need to get this up and running.
If you are worried about the idea of mocks making it into a production environment for any reason, you can consider using a 3rd party solution like Charles Proxy.
Using the map local tool you can route calls from a specific endpoint to a local file on your machine. You can paste plain text into your local file containing the response you want it to return. Per your example:
Your login hits endpoint yoursite.com/login
In Charles, using the map local tool, you can route the calls hitting that endpoint to a file saved on your computer, e.g. mappedlocal.txt
mappedlocal.txt contains the following text
HTTP/1.1 404 Failed
When Charles is running and you hit this endpoint, your response will come back with a 404 error.
You can also use another option in Charles called "map remote" and build an entire mock server which can handle calls and responses as you wish. This may not be exactly what you are looking for, but it's an option that may help others, and it's one I use myself.

getting random 404 errors using Valence

When I make API calls to the server, I'm getting 404 errors for various data -- grades, role IDs, terms -- that I won't get the next time I call it. The data is there on the server, viewable by the same user, and is often returned successfully, but not every time. The same user context will return data successfully for other calls.
Any ideas what could be causing this?
I'm using the Valence API with the Python client library and our 9.4.1 SP18 instance of Desire2Learn in a non-interactive script.
More detail: the text returned with the bad 404s is "Error: The system cannot find the path specified."
It would help enormously to gather data about your case: packet traces that show successful calls from your client alongside unsuccessful ones would be very useful to see. If you are quite certain (and from your description I see no reason you shouldn't be) that you're forming the calls the same way each time, then this behaviour points to a wider network or configuration issue: sometimes your calls are properly getting through the web service layer, and sometimes they are not. That would mean the problem is not in the way you're using the API but in the way the service is able to receive the request.
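To start gathering that data, something along these lines could log every call for later comparison (a plain-Python sketch using requests; the host is a placeholder, authentication is omitted, and a real script would wrap the Valence client library's calls instead):

import logging
import requests

logging.basicConfig(filename="valence_calls.log", level=logging.INFO)

def logged_get(url):
    # Record the URL, status code, and body of every call so the
    # intermittent 404s can be compared against successful responses.
    resp = requests.get(url)
    logging.info("GET %s -> %s\n%s", url, resp.status_code, resp.text[:500])
    return resp

# Placeholder route: the real script would make authenticated Valence calls.
logged_get("https://lms.example.com/d2l/api/lp/1.0/users/whoami")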
I would encourage you, especially if you can gather data showing this behaviour, to open a support incident with Desire2Learn's help desk in conjunction with your Approved Support Contact or your Partner Manager (depending on whether you're a D2L client or a D2L partner).

status code 500 internal server error in LoadRunner

I have a web application that I need to load test using LoadRunner. When I record the website using VuGen it works fine and there is no application bug. But when I try to replay the script, it fails after login, while navigating to the next page (say, Transaction). At the end of the log, I receive this error:
Action.c(252): Error -26612: HTTP Status-Code=500 (Internal Server Error)
for "http://rob.com/common/transaction
Please help me to resolve this error.
LoadRunner generates HTTP requests just as your browser does, so this error is the same error you would get if you went to that URL using your browser. Error code 500 is a generic server error that is returned when there is no better (more specific) error to return.
Most likely the login process requires some form of authentication which is protected against replay attacks by some form of token. It is up to you to capture this token using correlations in LoadRunner and replay it as the server expects. The Correlation Studio in VuGen should detect and identify the token for you, but since authentication methods vary, it is sometimes impossible to do this automatically and you will have to create a manual correlation. Please consult the product documentation for more details on how to do it. If your website is publicly available online, then post its URL and I will try to record the script on my machine.
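Outside LoadRunner, the capture-and-replay idea behind correlation looks roughly like this (a minimal Python sketch; the URL, form fields, and token pattern are placeholders, and in a real VuGen script the capture step would be a web_reg_save_param call placed before the request):

import re
import requests

session = requests.Session()

# Step 1: fetch the login page and capture the dynamic token from it.
page = session.get("http://rob.com/login")
match = re.search(r'name="csrf_token" value="([^"]+)"', page.text)
token = match.group(1) if match else ""

# Step 2: replay the captured token with the login POST instead of the
# hard-coded value that was recorded in the original script.
resp = session.post(
    "http://rob.com/login",
    data={"user": "demo", "password": "demo", "csrf_token": token},
)
print(resp.status_code)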
Thanks,
Boris.
Most common reasons
You are not checking each request for a valid result. If you treat a 200 HTTP status as an assumed success without examining the content of what is returned, you will not branch the code to handle the exception when the returned data is incorrect. Go one or two steps beyond the point where your business process came off the rails with an assumed success, and you will get a 500 status for an out-of-context action 100% of the time.
Missed dynamic element. Record three times. Compare the code. Address the changing components.

FedEx Track Web Service isn't recognizing any tracking numbers

I'm trying to use the FedEx API to track packages. I can authenticate to their test server successfully (using my user credentials, account number, and meter number). However, I receive the same unhelpful response for most tracking numbers that I use in my requests; both test tracking numbers (like 999999999999) and real tracking numbers (that work well on the FedEx website) return the following:
Error Code 9040.
No information for the following shipments has been received by our system yet. Please try again or contact Customer Service at 1.800.Go.FedEx(R) 800.463.3339.
The only requests that fetch a different response are the clearly invalid ones, like "test", which returns:
Error Code 5508.
Invalid tracking number.
I tried SOAP requests using their WSDL (TrackService_v5) as well as manual non-SOAP HTTP POST requests, but the responses are exactly the same in both cases. Is something wrong on their side, or am I doing something wrong?
It seems that FedEx has disabled the test tracking numbers; in the past 999999999999 would work just fine, but now even that doesn't work. To the best of my knowledge, the only way to resolve this is to move to production, which IMHO is bad because you cannot test the tracking part of your application until you move to production.
999999999999 worked for me, but I think I am already in the production environment.
