We have an API published on WSO2. It works perfectly.
When I send my request as in the picture below, it responds with 200 just as I wanted:
I wanted to test the request by appending more deleted=false query parameters. I can keep growing the request until its size reaches 5.75 KB and still see a nice 200 OK, as shown in the picture below:
But if I push the request size to 5.76 KB by adding one more deleted=false parameter, I see this error:
From what I found searching the internet, the REST API supports Uniform Resource Locators (URLs) with a length of up to 6000 characters.
My question is: how can I extend this limit? Is there any way to do that?
As per the shared screenshot, it seems the Backend itself is responding with a 400 Bad Request status code. The API Manager doesn't impose any restriction on large query parameters in the URI, so this error is coming from your actual Backend service, which is not able to handle such a large request.
To confirm this behavior, you can enable WIRE logs in the API Manager server and troubleshoot. If the request is dispatched to the Backend and the Backend responds with 400 Bad Request, then the Backend is only capable of handling requests up to 5.75 KB in your case.
Also, as an alternate check, you can try invoking the actual Backend service URL from Postman (direct invocation, not via WSO2) and verify the behavior with large requests.
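The same direct check is easy to script with curl; a minimal sketch (the host and path are placeholders for your actual Backend service, and the deleted=false parameter would be repeated until the request crosses the 5.75 KB mark):

curl -v "https://backend.example.com/resource?deleted=false&deleted=false&deleted=false"

The -v flag prints the raw status line, so you can see the 400 coming straight from the Backend rather than from WSO2.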
Given below are a few documentation links on enabling WIRE logs and understanding them:
WSO2 API Manager v3.1.0: Enable WIRE Logs
WSO2 API Manager v2.6.0: Enable WIRE Logs
How to read and understand WIRE Logs
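For quick reference, enabling WIRE logs comes down to setting the Synapse wire logger to DEBUG. A sketch for the log4j2.properties file used by API Manager 3.x (follow the linked docs for your exact version; note the logger name must also be appended to the file's loggers list):

logger.synapse-transport-http-wire.name = org.apache.synapse.transport.http.wire
logger.synapse-transport-http-wire.level = DEBUG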
OAuth 2.0: the "code" parameter is required to perform POST /.../oauth2/v2.0/token with a code value.
In Fiddler, the code value can be found in the Location header of the response to the /kmsi request:
However, there is no Location header in JMeter for the same request:
Why? Is there any tip for getting the Location header in JMeter too?
If you're seeing a different response, it means that:
Either you're sending a different request. In this case, inspect the request details from JMeter and from the real browser using a third-party sniffer tool like Fiddler or Burp, identify the inconsistencies, and amend your JMeter configuration so it sends exactly the same request as the real browser does (apart from dynamic values, which need to be correlated).
Or one of the previous requests fails somewhere. JMeter automatically treats HTTP responses with status codes below 400 as successful, but it might be that they are not really successful, i.e. you're continuously hitting the login page (you can check this by inspecting the Response data tab of the View Results Tree listener). Try adding Response Assertions to the HTTP Request samplers so there is another layer of explicit checks on the response data; this way you will gain confidence that JMeter is doing what it is supposed to be doing. A sketch of such an assertion is shown below.
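A minimal Response Assertion sketch for the login sampler (the pattern string is an assumption; pick a text that only appears once the user is really logged in to your application):

Response Assertion
  Apply to:               Main sample only
  Field to Test:          Text Response
  Pattern Matching Rules: Contains
  Patterns to Test:       Sign out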
I am attempting to use the REST API of Jenkins. Jenkins requires a POST request to a URL to delete a job. This results in the following:
1. I tell my chosen Client to send a POST to the appropriate URL.
2. The client sends a POST and authorizes itself with username and password.
3. Jenkins deletes the job.
4. Jenkins returns a "302 - Found" with the location of the folder containing the deleted job.
5. The client automatically sends a POST to the location.
6. Jenkins answers with "200 - OK" and the full HTML of the folder page.
This works just fine with Postman (unless I disable "Automatically follow redirects" of course).
Jersey, however, keeps running into a "404" at step 5, because I blocked anonymous users from viewing the folder in question. (Or a "403" if I block anonymous users altogether.)
Note that the authentication works for the first request, because the job has been deleted successfully!
I was under the impression that Jersey should use the given authentication for all requests concerning the client.
Is there a way to actually make this true? I really don't want to forbid redirects just to do every single redirect myself.
To clarify: the problem is that Jersey follows the redirect but fails to authenticate itself again, leading the server to reject the second request.
Code in question:
// Basic-auth feature holding the Jenkins username and API token
HttpAuthenticationFeature auth = HttpAuthenticationFeature.basicBuilder()
        .credentials(username, token)
        .build();
// Register basic auth for every request made by this client
Client client = ClientBuilder.newBuilder()
        .register(auth)
        .build();
// POST to doDelete; Jenkins deletes the job and answers with a 302
WebTarget deleteTarget = client.target("http://[Jenkins-IP]/job/RestTestingArea/job/testJob/doDelete");
Response response = deleteTarget.request()
        .post(null);
EDIT: The "302 - Found" only has 5 headers according to Postman: Date, X-Content-Type-Options ("nosniff"), Location, Content-Length (0) and Server. So there are neither cookies nor tokens that Postman might use and Jersey might disregard.
A question loosely related to this one: if I were able to log the second request, I might be able to understand what's happening behind the scenes.
EDIT2: I have also determined that the problem is clearly with the authentication. If I allow anonymous users to view the folder in question, the error disappears and the server answers with a 200.
I found the answer with the help of Paul Samsotha and Gautham.
TL;DR: This is intended behavior, and you have to set the system property http.strictPostRedirect=true to make it work, or perform the second request yourself.
As also described here, the developers of HttpURLConnection decided not to implement redirects as defined in the HTTP standard, but instead as many browsers implemented them (in layman's terms: "Do it like everyone else instead of how it is supposed to work"). This leads to the following behavior:
1. Send POST to URL_1.
2. Server answers with a "302 - Found" and includes URL_2.
3. Send GET to URL_2, dropping all the headers.
4. Server answers with a "404 - Not Found", as the second request does not include correct authentication headers.
The "404" response is the one received by the code, as steps 2 and 3 are "hidden" by the underlying code.
By dropping all headers, the authentication fails. As Jersey uses this class by default, this led to the behavior I was experiencing.
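In code, the two workarounds look roughly like this (a sketch, not taken from the original post; DELETE_URL stands in for the doDelete URL above, and ClientProperties is Jersey's org.glassfish.jersey.client.ClientProperties):

// Workaround 1: re-send the POST (with its headers) on redirect instead of
// downgrading it to a header-less GET; set before the first request is made.
System.setProperty("http.strictPostRedirect", "true");

// Workaround 2: disable automatic redirects and follow the 302 manually,
// so the registered auth feature is applied to the second request as well.
Client client = ClientBuilder.newBuilder()
        .register(auth)
        .property(ClientProperties.FOLLOW_REDIRECTS, false)
        .build();
Response response = client.target(DELETE_URL).request().post(null);
if (response.getStatus() == 302) {
    String location = response.getHeaderString("Location");
    Response folder = client.target(location).request().get();
}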
I am developing a REST API in Rails.
The API returns an HTTP 422 Unprocessable Entity with error messages when model validations fail.
However, a model can have several validations, and I want to delegate the translation of the error messages to the API consumer, which is why the consumer needs to differentiate the specific cause for the server returning a 422.
I was thinking about using subcodes, just like Facebook does in its API. Is there a way to do this while keeping to REST practices?
Also, what does one do when a 422 error occurs for multiple causes at the same time?
From RFC 7231, on the 4xx (Client Error) status class:
Except when responding to a HEAD request, the server SHOULD send a representation containing an explanation of the error situation, and whether it is a temporary or permanent condition.
Normally, you should encode information that is specific to your domain in the message-body of the response. The status line and response headers are there for generic components (browsers, caches, proxies) to have a coarse understanding of what is going on.
The Problem Details specification (RFC 7807) lays out the concern rather well.
consider a response that indicates that the client's account doesn't have enough credit. The 403 Forbidden status code might be deemed most appropriate to use, as it will inform HTTP generic software (such as client libraries, caches, and proxies) of the general semantics of the response.
However, that doesn't give the API client enough information about why the request was forbidden, the applicable account balance, or how to correct the problem. If these details are included in the response body in a machine-readable format, the client can treat it appropriately; for example, triggering a transfer of more credit into the account.
I don't promise that Problem Details is well suited for your purposes; but as prior art it should help you to recognize that the information you want to communicate belongs in the body of the response, with a suitable Content-Type header to inform the consumers which processing logic they need to use.
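As a sketch of what that can look like for the 422 case with several failed validations at once (the type URI and the errors member here are illustrative assumptions, not mandated by the spec; RFC 7807 explicitly allows such extension members):

HTTP/1.1 422 Unprocessable Entity
Content-Type: application/problem+json

{
  "type": "https://example.com/problems/validation-error",
  "title": "Validation failed",
  "status": 422,
  "errors": [
    { "field": "email", "code": "blank" },
    { "field": "age", "code": "not_a_number" }
  ]
}

The consumer can then key its translations off each machine-readable code rather than parsing prose, and multiple simultaneous causes simply become multiple entries in the array.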
I am currently working on an application that makes requests using the .NET SDK for the Microsoft Graph API, specifically to retrieve information about users and their OneDrives.
Microsoft throttles API requests by returning an HTTP 429 status code, and I have implemented a back-off using the Retry-After header. I have noticed, however, that I seem to be getting throttled after only a handful of requests.
I have also been using Microsoft Graph Explorer to test some of my API calls and have noticed that I never seem to get a 429 response when accessing the API via that method. After also seeing reports of people having issues with the OneDrive client on Linux that they managed to work around by changing their User-Agent header, I thought maybe I needed to set a User-Agent for my requests.
The result is that if I set the User-Agent header to something like Mozilla/5.0, all the throttling issues seem to disappear. I have searched high and low and so far haven't managed to find any documentation on what a valid User-Agent should be, and I would prefer to avoid making my app impersonate a browser. Is there any guidance or documentation that I might have missed?
For instance, a User-Agent of Mozilla/5.0 seems to result in no throttling, but MyApp/1.0 results in throttling.
There isn't any guidance on the User-Agent header and to be honest, I'm not sure why this would have any effect on throttling.
Throttling in Microsoft Graph is handled by the underlying service you're interacting with. For example, the /notes/ endpoint throttling is governed by OneNote while /messages is governed by Exchange.
In most cases, OneDrive will throttle by number of concurrent requests, per app, per user. So using Delegated permissions, your app should generally be able to concurrently upload 4 files without a problem. Any more than that and you will start to see 429 responses.
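If you want to stay under that limit proactively, one option is to gate requests per user behind a semaphore. A minimal Java sketch (the class name and the gating approach are my own illustration, not part of the Graph SDK; the 4-permit figure comes from the limit described above):

import java.util.concurrent.Semaphore;

public class GraphRequestGate {
    // At most 4 concurrent requests per user, per the OneDrive behavior above
    private final Semaphore slots = new Semaphore(4);

    public void run(Runnable graphCall) throws InterruptedException {
        slots.acquire();       // wait for one of the 4 slots to free up
        try {
            graphCall.run();   // the actual Graph/OneDrive request
        } finally {
            slots.release();
        }
    }
}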
There are some best practices from Microsoft for handling throttling:
https://learn.microsoft.com/en-us/sharepoint/dev/general-development/how-to-avoid-getting-throttled-or-blocked-in-sharepoint-online#how-to-decorate-your-http-traffic-to-avoid-throttling
You should decorate your HTTP User-Agent as follows:
NONISV|CompanyName|AppName/Version
Identify as NONISV and include the company name and app name separated by a pipe character, then add the version number separated by a slash character.
or
ISV|CompanyName|AppName/Version
Identify as ISV and include the company name and app name separated by a pipe character, then add the version number separated by a slash character.
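Concretely, the decorated header ends up looking like this (the company and app names are made up for illustration):

User-Agent: NONISV|Contoso|GovernanceReport/1.0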
I am making a Google API request from my application using the RestClient library to get an address.
Sample request code:
require 'rest-client'
require 'json'

# Reverse-geocode a lat/lng pair, requesting results in Arabic
gmaps_api_href = "https://maps.googleapis.com/maps/api/geocode/json?latlng=18.56227673,73.76804232&language=ar"
response = RestClient.get gmaps_api_href
result = JSON.parse(response)['results']
This request works fine on my local machine and completes within 1-2 seconds. But on the production instance it takes 20 seconds to finish one request.
Due to some security measures, we cannot access the production instance directly, so I am unable to pinpoint the cause of this delay.
After some trial and error, we found that:
If we make the request using curl, it takes 1 second on the server.
If we make the request using Net::HTTP, it takes 20 seconds, the same as we observed with RestClient.
If we make the request using WebRequest in a small .NET app, the request completes within 1 second.
It's difficult for me to explain the difference between the above observations.
Please let me know why this is, and what changes I should make so it works in my Rails app.
Are you using a Google API key? Your example does not show one. If not, I'd guess you are being rate-limited by Google. On your server, you've probably already deployed a version of this app that made lots of requests to Google without an API key in the fairly recent past; Google noticed, and its rate-limiting software may be slowing down requests made from that server. Your local machine, by contrast, hasn't made an enormous number of requests to the Google API in the past, so it is not being rate-limited by Google's servers.
It's possible Google's rate-limiting pays some attention (for now!) to User-Agent, and the different User-Agent sent by curl somehow evades the rate-limiting triggered by the requests sent by RestClient with its User-Agent (RestClient may use Net::HTTP under the hood and have the same User-Agent as it).
While one would hope that if you were rate-limited you'd get a "429 Too Many Requests" error response instead of just a slow response, it's possible RestClient hides this from you (I haven't used RestClient), and I've also seen some unpredictable behavior from Google rate-limiting defenses, especially when not using an API key on a service that requires one for all but a few sample requests. I have seen things similar to what you report in that case.
My guess is you're being rate-limited because you are not using an API key. Get and use an API key from Google. Google still applies rate limits when you use an API key, but they are clearly advertised (for free: 2,500 per day and no more than 10 per second; more if you pay) and should produce clearer, more predictable error messages when exceeded. That's part of why Google requires the API key: so they can reliably rate-limit you in clear ways.
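For reference, the key travels as just one more query parameter on the same geocoding URL (YOUR_API_KEY is a placeholder for your own key):

https://maps.googleapis.com/maps/api/geocode/json?latlng=18.56227673,73.76804232&language=ar&key=YOUR_API_KEY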
https://developers.google.com/maps/documentation/geocoding/usage-limits
https://developers.google.com/maps/documentation/geocoding/intro#BYB