Sending custom HTTP error information to Flash, JavaScript, etc - asp.net-mvc

I'm developing a REST API at the moment, and one of its core features is that it uses a variety of HTTP status codes to return status/error information, some of which may carry extended information (e.g. if an item is not found, a list of similar items) in the response body.
This is fine until you get to 'crippled' clients like Flash and JavaScript, which can't access the response body or headers unless the HTTP status code is 200 OK (even a 201 Created success code can cause Flash to fail, thinking it's an error).
So my question is: is there a standard way of allowing this type of client to request that all status codes be returned as HTTP 200, with the real status code indicated in some other way?
One solution I was thinking of is, in the pattern of the HTTP Accept-* family of headers, using an X-Accept-Status extension header to specify which status codes can be handled, e.g. Flash would send...
X-Accept-Status: 200
...and then any status code not in this list would be mapped to one that is, and the error returned in the response body, possibly with another extension header indicating the real status code, e.g.
X-HTTP-Status-Code: 404 Not Found
This all seems a bit horrible, and works against the protocol, but if you have clients that cannot use the protocol properly then that's unavoidable. I'm just looking for something a bit like X-HTTP-Method-Override (which is a 'standard' way of working around the protocol for clients that cannot send PUT/DELETE requests), but for clients that cannot understand status codes.
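For illustration, here is a minimal sketch of that mapping as an ASP.NET MVC action filter. The X-Accept-Status and X-HTTP-Status-Code names are just the hypothetical headers proposed above, and it assumes the response is still buffered when the action result has finished executing:

using System;
using System.Linq;
using System.Web.Mvc;

// Hypothetical filter: if the client sent X-Accept-Status, remap any
// status code it cannot handle to 200 OK and report the real one in an
// extension header.
public class StatusCodeOverrideAttribute : ActionFilterAttribute
{
    public override void OnResultExecuted(ResultExecutedContext filterContext)
    {
        var request = filterContext.HttpContext.Request;
        var response = filterContext.HttpContext.Response;

        var accepted = request.Headers["X-Accept-Status"];
        if (String.IsNullOrEmpty(accepted))
            return; // normal client: leave the status code alone

        var codes = accepted.Split(',').Select(s => Int32.Parse(s.Trim()));
        if (!codes.Contains(response.StatusCode))
        {
            // Preserve the real status in an extension header, then
            // downgrade to one the client can handle.
            response.AddHeader("X-HTTP-Status-Code",
                response.StatusCode + " " + response.StatusDescription);
            response.StatusCode = 200;
        }
    }
}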

Well, the problem with HTTP and REST is that REST is a really good idea and HTTP describes a really good implementation of it, but many clients and servers only implement part of HTTP.
I don't think HTTP is a must. Still, REST is a good idea, and RESTfulness is a powerful property of a system, so why not use HTTP as a dumb transport layer for a RESTful system?
This is what you are doing, although in my opinion you are holding on a bit too much to HTTP and all its theoretically built-in features. Do you really need to transport the information in a status code?
Don't depend so much on your transport protocol/layer. Have a clear idea of how your service should work, and separate the protocol semantics from their implementation, on both client and server. Abstract your RESTfulness, and your status codes too: make them more than just integers; make them enums, or objects, or exceptions, why not?
Then plug in protocols/transport layers at will (a sketch of this abstraction follows the list below):
Make a standard HTTP implementation.
Make a hacky one, using the solution you described (which to me seems perfectly valid; if people are using technologies unable to use the standards, why should you bother too much about finding the most standards-conformant solution?).
Make whatever you have the time to do and your server is able to do: binary, JSON, XML, whatever seems adequate.
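To make that concrete, here is a rough C# sketch of the kind of abstraction this answer suggests; all the type names are invented for illustration:

// Keep the service's own notion of status, independent of HTTP.
public enum ServiceStatus { Ok, Created, NotFound, Conflict }

public class ServiceResult<T>
{
    public ServiceStatus Status { get; set; }
    public T Body { get; set; }
}

// Each transport then maps ServiceStatus however it can: the standard
// HTTP implementation turns NotFound into a real 404, while the hacky
// one for Flash/JavaScript always answers 200 and reports the status
// in the body or in an X-HTTP-Status-Code header.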
Two technical notes, though:
The Flash player does its HTTP traffic through the browser, and it simply does not get the status codes from the browser. Well, it depends on the browser, in fact; the specs say it does not work in "Netscape, Mozilla, Safari, Opera, and Internet Explorer for the Macintosh." So IE for Windows should be working? Chrome? I don't know, but I think it doesn't matter, since you obviously cannot rely on it. And to state the most obvious: JavaScript also does its HTTP through the browser, of course, so the same problem applies there.
For both, this implies that if you did succeed in finding something like X-HTTP-Method-Override for responses that was built into the protocol, a good browser would understand it and remap things accordingly before deciding which information to give to JavaScript or third-party plugins, so you'd end up with nothing again, I guess.
You should simply choose your response method based on the client, and maybe the client should send some extra info if it is unable to use the HTTP standard; otherwise, send it what follows the standard. I'd first make an implementation using standard HTTP, yet hiding the HTTP itself away, and once everything works, write one using the workaround described above.
greetz
back2dos

Am I wrong for thinking that one shouldn't let a crippled out-of-the-box potential client of the API dictate the features of the API implementation? Practical considerations win the day, I suppose, but in general my vote is in favor of building API implementations "properly" and requiring custom client-side programming as needed.

Bit late for that response, but...
When I implemented a Flash client API with an early version of OpenRasta, I had an X-ResponseLine header that contained the response code and text on each outgoing response.
As custom headers are by default only generic headers, they have no involvement in caching, so there is no reason to have an Accept/Vary pair for this.

Related

POST with TIdHTTP hangs on retrieving the JSON response

This question is perhaps more a tip for people searching for a solution to the same problem (as I eventually found the solution).
I had an application that does some HTTP requests against a local server (a mix of GET/POST with JSON content in the request/response bodies). The server is a third-party application, and after I upgraded it to a recent version, my Delphi app no longer worked.
It turned out that it was now hanging on the statement:
IdHTTP.Post('URL', Payload, BytesStreamResult); // Payload, BytesStreamResult: TStream
As a manual POSTMAN request was still working, it had to be on the Delphi client side.
Further isolating the issue showed that the HTTP POST request did get an HTTP 200 response with valid HTTP response headers, but then was getting stuck reading the response body. It was hanging on:
IOHandler.ReadLn
When I compared the headers with the POSTMAN response, I noticed that 'Transfer-Encoding: chunked' was missing in the Delphi response.
Finally, I noticed the code related to TIdHTTP's hoKeepOrigProtocol option, which is not set by default.
So, my POST request was "downgraded" to an HTTP 1.0 request, and I guess this made the (updated) server respond differently (I'm not an RFC expert, but I guess 'chunked' may be an HTTP 1.1-only feature).
After setting this option, everything worked like before (and indeed, the response was now read as "chunked" in Delphi).
Summary:
Shouldn't hoKeepOrigProtocol be the default option? (why punish good citizens for those that are not...)
Can we intercept this? Right now my POST assumes upfront a streamed response, and thus it hangs because the server doesn't write anything to the buffer.
What would that high-level code look like? It seems a mix of interpreting the response headers and then deciding whether more response reading is required.
(I didn't do anything specific regarding time-outs, either. I have the impression it hangs forever, or at least > 10 minutes...)
TIdHTTP supports non-chunked responses just fine (and yes, 'chunked' is an HTTP 1.1-only feature), so the hang would have to be caused by the server sending a malformed response (a bug that should be reported to the server author).
When reading a non-chunked and non-MIME response, TIdHTTP does not use IOHandler.ReadLn to read the response's body, as you claim; it does so only when reading the response's headers.
But, since you did not show what the response actually looks like, nobody can explain for sure exactly why the hang occurs.
Shouldn't hoKeepOrigProtocol be the default option?
At the time the option was first introduced, no. There were enough buggy HTTP 1.1 servers around that downgrading to HTTP 1.0 was warranted.
However, that was many years ago. Nowadays, HTTP 1.1 is much more mature, and such buggy servers are rare. So, feel free to submit a change/pull request to Indy's GitHub repo if you feel the default behavior should be changed.
Can we intercept this?
No. The behavior you describe is most likely caused by a bug in the HTTP server. Either it is not sending all of the data it should be, or else the response is likely malformed in a way that makes TIdHTTP expect more data than is actually being sent. Either way, all you can do is assign a non-infinite timeout to TIdHTTP.
I didn't do anything specific regarding time-outs, either. I have the impression it hangs forever, or at least > 10 minutes.
Indy is designed to use infinite timeouts by default. You can assign custom timeouts to TIdHTTP's ConnectTimeout and ReadTimeout properties.
Setting this option prevents the HTTP protocol downgrade:
IdHTTP.HTTPOptions := IdHTTP.HTTPOptions + [hoKeepOrigProtocol];
This is, of course, dependent upon how the server processes the protocol specification, and whether or not it results in issues.

Fn project is missing http operations (CRUD)

I have spent my afternoon getting very excited about the container-native serverless platform 'fn project' - http://fnproject.io/.
I love the idea of the FaaS model but have no intention of locking myself into a particular cloud vendor for most of the lifetime of an app - and several other reasons including the desire to spin up the entire app on a small server anywhere if I choose.
fn project seemed great for my needs, until I finished perusing the documentation and all the relevant blog posts and suddenly thought: 'What? Wait... what??? Where are the HTTP operations?'
I cannot find a single reference anywhere that states if it is even possible to have HTTP triggers for different HTTP operations (i.e. POST, PUT, PATCH, DELETE), let alone how I would do it.
I want to build REST APIs (or certainly at the very least JSON-serving HTTP-based RPC APIs - if it doesn't have hypermedia links it isn't REST ;) but let's not get into that one in this thread).
Am I missing something here (certainly the correct bit of documentation)?
Can anybody please enlighten me as to how I would do this, or even tell me if I have totally misunderstood what I should use this for?
My excitement has gone soft for now, but I'm hoping that will change with the right information.
It feels odd that I can't find anyone else complaining about this, so I think that indicates a misunderstanding on my part.
Other solutions such as OpenFaaS look interesting, but I don't want to have to learn how to deploy Kubernetes and Docker swarms if I can avoid it :)
I'm not an expert, but as of now it seems it is not possible to specify the HTTP method inside the trigger. Check the latest trigger spec: as you can see, there is no notion of HTTP method there.
However, handling different HTTP methods can be done inside the function itself.
For example, in Java (with fdk-java v1.0.80), you can use com.fnproject.fn.api.httpgateway.HTTPGatewayContext as the first parameter of the function, as described in the section "Accessing HTTP Information From Functions" of the documentation:
In Fn for Java, when your function is being served by an HTTP trigger (or another compatible HTTP gateway) you can get access to both the incoming request headers for your function by adding a 'com.fnproject.fn.api.httpgateway.HTTPGatewayContext' parameter to your function's parameters.
Using this allows you to :
...
Access the method and request URL for the trigger
...
You can then retrieve the HTTP method by calling getMethod() on the HTTPGatewayContext passed as a parameter.
In other languages (with other FDKs), it's possible to do the same:
in Go: an example calling RequestMethod() on the context
in Ruby: class HTTPContext
in Python: class HTTPGatewayContext
in Node: class HTTPGatewayContext
From these different contexts, you'll then be able to get the method parameter passed when invoking fn invoke --method=[GET|POST|...] (via the fn-http-method header).
The main drawback here is that all HTTP methods have to be handled in the same function. Nonetheless, you can structure your code to have only one class per method.
After some further thought it seems fairly clear now what my actual misunderstanding was....
When I have built Serverless framework services in the past (or built and deployed Lambda functions using Terraform), I have been deploying to AWS and so have been using AWS's API Gateway offering (their product is actually called API Gateway, but it's important to recognise that an API gateway is also a distributed-systems / micro-services design pattern).
An API gateway makes it possible to route specific HTTP request types, including the method (GET, POST, PUT, DELETE), to the desired functions.
Platforms such as Fn project and OpenFaaS do not provide an out-of-the-box API gateway solution, and it seems we would need to take care of this ourselves.
These platforms are about the deployment of functions; we find the other bits via our product of choice.

A working example of HttpResponse.PushPromise() in MVC applications

I've read about push promise in the HTTP/2 specs and several other tutorials, and have an idea of it as a concept.
I've read here on SO why bundling won't be as relevant going forward. So, if I have to incorporate push promise into applications, where is the ideal place to do this? Should it be just before redirecting to the view from the action method? Or in the script in the view? As far as I've searched, I couldn't find any examples.
Could someone please share their experience implementing this in real code? Does it seem like an overhead if you have to support both protocols?
Also, if I'm using IIS 10, are there any configuration changes that I should make to support both protocols? [As far as I've read, we don't have to. But it's always better to heed some experts.]
So, if I have to incorporate push promise into applications, where is the ideal place to do this? Should it be just before redirecting to the view from the action method? Or in the script in the view?
I did it in the controller action method while experimenting, but if you have common resources you may want to move it somewhere more fundamental/shared in the pipeline. Anywhere that has access to the HttpResponse object should work. As I noted here, you'll want to use the PushPromise overload that takes in an HTTP method and headers if what you're pushing will vary based on any request headers, e.g. accept-encoding (compression).
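As a minimal sketch (assuming .NET 4.6+ behind IIS 10; the resource paths are hypothetical), pushing from an action method might look like this:

using System.Collections.Specialized;
using System.Web.Mvc;

public class HomeController : Controller
{
    public ActionResult Index()
    {
        // Simple overload: push a resource the view is known to need.
        // On a connection without push support this is a no-op, so the
        // call is safe either way.
        Response.PushPromise("~/Content/site.css");

        // Overload taking a method and headers, for when the pushed bytes
        // vary by request headers such as accept-encoding (compression).
        Response.PushPromise("~/Scripts/app.js", "GET",
            new NameValueCollection { { "accept-encoding", "gzip" } });

        return View();
    }
}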
Does it seem like an overhead if you have to support both protocols?
Also, if I'm using IIS 10, are there any configuration changes that I should make to support both protocols?
You do not need to do anything explicitly to support both protocols; IIS will take care of it. Per David So of Microsoft, "provided the client and server configuration supports HTTP/2, then IIS will use HTTP/2 (or fallback to HTTP/1.1 if not possible)". This is true even if you're using server push: "If the underlying connection doesn’t support push (client disabled push, or HTTP/1.1 client), the call does nothing and returns success, so you can safely call the API without needing to worry about whether push is allowed."
Incidentally, if you want to disable HTTP/2 on Windows Server 2016, you can do so via the registry.
In addition to checking IIS logs, as David So suggested, you can verify HTTP/2 is being used by right-clicking on the headers row (Name, Status, Type, etc.) in Chrome's Network tab and checking off "Protocol"; you'll see "h2" for HTTP/2 responses. You can verify push promises are working by looking at the Chrome HTTP/2 internals page (chrome://net-internals/#http2) and looking at the "Pushed" and "Pushed and claimed" columns for your domain.

How do I access a SoundCloud public stream?

How do I play a track from a SoundCloud URL which, for example, I got from the XML response to a query:
<stream-url>https://api.soundcloud.com/tracks/31164607/stream</stream-url>
I should have thought that it would have been as easy as:
https://api.soundcloud.com/tracks/31164607/stream&client_id=my_client_id
yet I get
<error>401 - Unauthorized</error>
All I want to do is consume it in a Silverlight MediaElement, so all I need is to set some URL as the MediaElement's Source property.
I've checked an application that I wrote about 2 years ago, and THEN, accessing the stream url was as easy as this for a public track:
http://api.soundcloud.com/tracks/18163056/stream&consumer_key=MY_CONSUMER_KEY
however this no longer seems to work.
For example, all I had to do then in C# was:
MediaElement me = new MediaElement();
me.Source = new Uri("http://api.soundcloud.com/tracks/18163056/stream&consumer_key=MY_CONSUMER_KEY");
me.Play();
Any hints would be appreciated.
I had a reply on a Microsoft forum that seems to imply that SoundCloud might not be possible to stream to Windows 8 Metro devices without consuming the whole stream before playback starts - which is quite worrying, and would seem to imply that to make authentication possible, it would have to be done entirely in the URL query string instead of using the header:
(The following reply is the answer to this question: 'I am able to access an audio stream over http using the MediaElement; however, I need to access it via https, in which case I need to add the oAuth info to the header of the initial request.
How is this done when using a MediaElement, and if it cannot be done, what is the workaround for consuming an audio feed in Metro 8 that requires header authentication to stream?')
"Direct access to the underlying network stream is not currently permitted by the MediaElement. Because of this there is currently no way to modify the header of the HTTP request to include any additional authentication information. That said, you do have control over the URL. You could theoretically setup an HTTP proxy service that translated the HTTP GET request parameters into the necessary oAuth credentials. Keep in mind that this is just a theoretical workaround. You may find different behavior in practice. Another theoretical workaround would be to handle the oAuth yourself via a raw stream socket and pass the retuned media data to the MediaElement via "Set Source" and a "Random Access Stream". Please keep in mind that this method has major limitations. in order to use a "Random Access Stream" with the ME you need to make sure all of the data is available before passing it to the ME."
The proxy service is not scalable for an application that is merely distributed for free, as every stream would need to come via the proxy. And the raw stream socket, although it gets around this, would mean that playback could not start until the whole file had downloaded - and this goes against all current UX (User Experience) guidelines.
So once again, if anyone has any tips, or info about how the whole authentication thing can be achieved in a querystring instead of using headers, I'd appreciate it!
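For reference, here is a rough sketch of the proxy workaround from the quoted reply, written as an ASP.NET MVC action; the route, parameter, and token are all hypothetical:

using System.Net;
using System.Web.Mvc;

public class StreamProxyController : Controller
{
    // The client points its MediaElement at /StreamProxy/Track/31164607,
    // and this action adds the OAuth header before forwarding the
    // request to SoundCloud.
    public ActionResult Track(long id)
    {
        var upstream = (HttpWebRequest)WebRequest.Create(
            "https://api.soundcloud.com/tracks/" + id + "/stream");
        upstream.Headers["Authorization"] = "OAuth ACCESS_TOKEN"; // hypothetical token

        var proxied = (HttpWebResponse)upstream.GetResponse();
        return new FileStreamResult(proxied.GetResponseStream(), proxied.ContentType);
    }
}

This keeps the credentials server-side, though, as noted above, every stream then flows through the proxy.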
I'm a little confused about whether you're referring to a public or a private track? If it's a public track, then you shouldn't need to send any authentication information, just your client id.
When I request https://api.soundcloud.com/tracks/31164607/stream?client_id=YOUR_CLIENT_ID then I get a 302 redirect to the proper mp3 stream.
Remember, the first parameter added to a URL must start with a ?, not an &. This could (more than likely) be the reason why you are getting a 401 (SC is not picking up the client_id).
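In other words, a minimal sketch using the question's own Silverlight code with the corrected query string:

MediaElement me = new MediaElement();
me.Source = new Uri("https://api.soundcloud.com/tracks/31164607/stream?client_id=YOUR_CLIENT_ID");
me.Play();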
After authentication, a link like this
http://api.soundcloud.com/tracks/103229681/stream?consumer_key=d61f17a08f86bfb1dea28539908bc9bf
works fine. I am using ActionScript.
I'm following up on Tom's reply because he calls attention to URL character specificity. My HTTP requests randomly started failing today, and I was prefacing my client_id with a ?. As soon as I changed that single ? to an &, it started working. So in my case, SC wasn't picking up my client_id because I used the wrong character. Which character is right depends on where in the request the parameter sits: ? introduces the query string, and & separates the parameters that follow, so the difference between ? and & does matter.

Delphi - Connecting and logging in to a webpage

EDIT
There has been quite a development. The current problem is this:
I compared requests sent from a browser with those sent from my app. There were some differences, and I managed to correct most of them. Some are still unfixed, since I haven't figured out how yet. I am using Indy.
How can I send (or add) cookies in the request?
I tried this: IdHTTP.CookieManager.AddCookie('bakatheme=BrectanTheme', IdHTTP1.URL), but it doesn't work. Also, the Indy help says it is supposed to be AddCookie(String, String), but my Delphi only accepts (String, TIdURI) - I am not sure if I am passing the right URI.
In the headers I have this code: AcceptEncoding := 'gzip,deflate,sdch'; yet when I parse the outgoing request, it states this: Accept-Encoding: gzip,deflate,sdch,identity - but I am certain I don't have "identity" anywhere in the code.
Those are the two things in which my request differs from the browser's. Now I am getting a 500 Internal Server Error in return; can it be caused by the lack of cookies, or by the second thing?
Thank you very much.
I haven't exactly tried it myself, but here's an example I found about website login using Indy:
http://www.ciuly.com/delphi/indy/persistent-login-example-for-geocacheing-no-ssl/
OK, let's comment:
How can I send (or add) cookies in the request?
You should not do that. Indy handles this for you (but if you really want to, there is TIdCookieManager). It seems to me that you don't know how cookies work: a cookie is not a thing you can just add to a request; it comes from the server, and it identifies you.
In the headers I have this code: AcceptEncoding := 'gzip,deflate,sdch';
AcceptEncoding tells the server that it can compress the response using those algorithms. Indy supports gzip, deflate, sdch and identity, and Indy is updating the request header to add the one you left out.
You should take a look at these links to learn how HTTP works:
W3
Wikipedia
