Delphi - Connecting and logging in to a webpage

EDIT
There has been quite a development. The current problem is this:
I compared the requests sent from a browser and from my app. There were some differences and I managed to correct most of them; some are still unfixed, since I haven't figured out how yet. I am using Indy.
How can I send (or add) cookies in the request?
I tried this: IdHTTP.CookieManager.AddCookie('bakatheme=BrectanTheme', IdHTTP1.URL) but it doesn't work. Also, the Indy help says it is supposed to be AddCookie(String, String), but my Delphi only accepts (String, TIdURI) - I am not sure whether I am passing the right URI.
In the headers I have this code: AcceptEncoding := 'gzip,deflate,sdch'; yet when I inspect the outgoing request, it states: Accept-Encoding: gzip,deflate,sdch,identity - but I am certain I don't have "identity" anywhere in the code.
Those are the two things in which my request differs from the browser's. Now I am getting a 500 Internal Server Error in return; can it be caused by the missing cookies or by the second issue?
Thank you very much.

I haven't exactly tried it myself, but here's an example I found about logging in to a website using Indy:
http://www.ciuly.com/delphi/indy/persistent-login-example-for-geocacheing-no-ssl/

Ok. Let's comment:
How can I send (or add) cookies in the request?
You should not do that. Indy handles this for you (but if you really want to, there is a TIdCookieManager). It seems to me, though, that you don't know how cookies work: a cookie is not something you add to a request by hand. It comes from the server, and it identifies you.
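That said, if you want Indy to collect and resend the server's cookies across requests, the usual pattern is to give it a cookie manager and let it do the bookkeeping. A minimal sketch (the URLs are placeholders):
uses
  IdHTTP, IdCookieManager;
var
  Http: TIdHTTP;
  Page: string;
begin
  Http := TIdHTTP.Create(nil);
  try
    Http.CookieManager := TIdCookieManager.Create(Http);
    Http.AllowCookies := True; // keep cookies sent in Set-Cookie headers
    // The login response sets the session cookie...
    Page := Http.Get('http://example.com/login');
    // ...and the same TIdHTTP instance resends it automatically.
    Page := Http.Get('http://example.com/members');
  finally
    Http.Free;
  end;
end;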
In the headers I have this code: AcceptEncoding := 'gzip,deflate,sdch';
Accept-Encoding tells the server that it may compress the response using those algorithms. Indy supports gzip, deflate, sdch and identity, and it is updating the request header to add the one you left out.
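If you also want Indy to decompress those responses for you, you can assign a compressor; a one-line sketch (assumes your Indy version ships the IdCompressorZLib unit):
// Indy can then decode gzip/deflate response bodies itself.
IdHTTP1.Compressor := TIdCompressorZLib.Create(IdHTTP1);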
You should take a look at these links to learn how HTTP works:
W3
Wikipedia

Related

POST with TIdHTTP hangs on retrieving the JSON response

This question is perhaps more of a tip for people searching for a solution to the same problem (as I found the solution eventually).
I had an application that does some HTTP requests against a local server (a mix of GET/POST with JSON content in the request/response bodies). The server is a third-party application, and after I upgraded it to a recent version, my Delphi app no longer worked.
It turned out that it was now hanging on the statement:
IdHTTP.Post("URL", "Payload", "BytesStreamResult");
As a manual Postman request was still working, the problem had to be on the Delphi client side.
Further isolating the issue showed that the HTTP POST request did get an HTTP 200 response with valid HTTP response headers, but then was getting stuck reading the response body. It was hanging on:
IOHandler.ReadLn
When I compared the headers with the POSTMAN response, I noticed that 'Transfer-Encoding: chunked' was missing in the Delphi response.
Finally, I noticed the code related to TIdHTTP's hoKeepOrigProtocol option, which is not set by default.
So, my POST request was "downgraded" to an HTTP 1.0 request, and I guess this made the (updated) server respond differently (I'm not an RFC expert, but I guess 'chunked' may be an HTTP 1.1-only feature).
After setting this option, everything worked like before (and indeed, the response was now read as "chunked" in Delphi).
Summary:
Shouldn't hoKeepOrigProtocol be the default option? (why punish good citizens for those that are not...)
Can we intercept this? Right now my POST assumes a streamed response upfront, and thus hangs because the server doesn't write anything to the buffer.
What would that high-level code look like? It seems to be a mix of interpreting the response headers and then deciding whether more response reading is required.
(it didn't do anything specific regarding time-outs, either. I have the impression it hangs forever, or at least > 10 minutes...)
TIdHTTP supports non-chunked responses just fine (chunking is, yes, an HTTP 1.1 feature), so the hang would have to be caused by the server sending a malformed response (a bug that should be reported to the server author).
When reading a non-chunked, non-MIME response, TIdHTTP does not use IOHandler.ReadLn to read the response's body, as you claim; it uses ReadLn only when reading the response's headers.
But, since you did not show what the response actually looks like, nobody can explain for sure exactly why the hang occurs.
Shouldn't hoKeepOrigProtocol be the default option?
At the time the option was first introduced, no. There were enough buggy HTTP 1.1 servers around that downgrading to HTTP 1.0 was warranted.
However, that was many years ago. Nowadays, HTTP 1.1 is much more mature, and such buggy servers are rare. So, feel free to submit a change/pull request to Indy's GitHub repo if you feel the default behavior should be changed.
Can we intercept this?
No. The behavior you describe is most likely caused by a bug in the HTTP server. Either it is not sending all of the data it should be, or else the response is likely malformed in a way that makes TIdHTTP expect more data than is actually being sent. Either way, all you can do is assign a non-infinite timeout to TIdHTTP.
it didn't do anything specific regarding time-outs, either. I have the impression it hangs forever, or at least > 10 minutes.
Indy is designed to use infinite timeouts by default. You can assign custom timeouts to TIdHTTP's ConnectTimeout and ReadTimeout properties.
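For example (the values here are arbitrary, in milliseconds):
IdHTTP.ConnectTimeout := 5000;  // give up connecting after 5 seconds
IdHTTP.ReadTimeout := 30000;    // give up waiting on a silent server after 30 seconds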
Setting this prevents the HTTP protocol downgrade:
IdHTTP.HTTPOptions := IdHTTP.HTTPOptions + [hoKeepOrigProtocol];
This is, of course, dependent on how the server handles the protocol version, and on whether or not that causes issues.

How can I send an AJAX request from an ActiveX control/NPAPI plugin and process its response?

Currently I have a webpage where I send an AJAX request from JavaScript, and in response the server sends a video file, which by default is saved to the browser's download location. I want the user to select the download path every time a file is downloaded (this can be achieved by changing browser settings, which is not suitable for me). So I want to include an ActiveX object which can send the AJAX request and get its response. First I want to know whether this is possible; if yes, are there any prototypes/examples, or please let me know how it can be achieved.
It is possible; FireBreath has a mechanism called BrowserStreams that would probably work for what you're describing, but honestly I'd advise against it. See if you can do what you need with an extension; Chrome is dropping support for NPAPI next year, and even if it weren't, I think it's a really bad idea to use a plugin for something like this.
Up to you, of course. There are examples for making HTTP GET and POST requests in the FBTestPlugin example in FireBreath.

synapse httpsend through proxy server

I have a routine that currently uses HttpGetText to send two URLs out to Google: the first with the Maps key, and the second to get some distance calculations, which are returned as a JSON object.
It all worked fine but now the client wants it to go through a proxy server.
I have tried modifying code from the Synapse knowledge base, but I just get a bad response.
The code that works without a proxy looks like this:
buildstring := 'http://maps.google.com/maps?file=api&v2&key=ASASASASASASASAS-AAAA';
HttpGetText(buildstring, myoutput);
buildstring := 'http://maps.googleapis.com/maps/api/directions/json?origin=' + trim(start_postcode) + '&destination=' + trim(end_postcode) + '&sensor=false';
HttpGetText(buildstring, myoutput);
How do I get the same response but through a proxy?
The google maps key above is fake - and will not work - you need to use your own.
When I tried modifying an example, the first request came back OK, but the second came back with a 400 Bad Request.
With thanks in advance
Phil Hutchinson
I have found the issue.
I looked at the demo source code supplied: if I create an instance of THTTPSend, put the proxy info in, and send the request, the first one works.
The second request fails, so it must be something to do with the HTTPMethod call leaving some rubbish in the object. If I destroy the instance and create a fresh one for the next request, it works fine.
Not the perfect solution but it works!
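For reference, a minimal sketch of that workaround (the proxy address is a placeholder; THTTPSend lives in Synapse's httpsend unit):
uses
  Classes, httpsend;

function GetViaProxy(const URL: string; output: TStrings): Boolean;
var
  http: THTTPSend;
begin
  // A fresh THTTPSend per request avoids state left over from the previous call.
  http := THTTPSend.Create;
  try
    http.ProxyHost := '10.0.0.1'; // placeholder proxy address
    http.ProxyPort := '8080';
    Result := http.HTTPMethod('GET', URL);
    if Result then
      output.LoadFromStream(http.Document);
  finally
    http.Free;
  end;
end;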

How do I access a SoundCloud public stream?

How do I play a track from a SoundCloud URL which, for example, I got from the XML response to a query:
<stream-url>https://api.soundcloud.com/tracks/31164607/stream</stream-url>
I should have thought that it would have been as easy as:
https://api.soundcloud.com/tracks/31164607/stream&client_id=my_client_id
yet I get
<error>401 - Unauthorized</error>
All I want to do is consume it in a Silverlight MediaElement, so all I need is set some url to the MediaElement's Source property.
I've checked an application that I wrote about two years ago, and THEN accessing the stream URL for a public track was as easy as this:
http://api.soundcloud.com/tracks/18163056/stream&consumer_key=MY_CONSUMER_KEY
however this no longer seems to work.
For example, all I had to do then in C# was:
MediaElement me = new MediaElement();
me.Source = new Uri("http://api.soundcloud.com/tracks/18163056/stream&consumer_key=MY_CONSUMER_KEY");
me.Play();
Any hints would be appreciated.
I had a reply on a Microsoft forum that seems to imply that SoundCloud might not be streamable to Windows 8 Metro devices without consuming the whole stream before playback starts - which is quite worrying, and would seem to imply that to make authentication possible, it would have to be done entirely in the URL query string instead of using the header:
(The reply below answers this question: 'I am able to access an audio stream over HTTP using the MediaElement; however, I need to access it via HTTPS, where I need to add the oAuth info to the header of the initial request.
How is this done when using a MediaElement, and if it cannot be done, what is the workaround for consuming an audio feed in Metro 8 that requires header authentication to stream?')
"Direct access to the underlying network stream is not currently permitted by the MediaElement. Because of this there is currently no way to modify the header of the HTTP request to include any additional authentication information. That said, you do have control over the URL. You could theoretically setup an HTTP proxy service that translated the HTTP GET request parameters into the necessary oAuth credentials. Keep in mind that this is just a theoretical workaround. You may find different behavior in practice. Another theoretical workaround would be to handle the oAuth yourself via a raw stream socket and pass the retuned media data to the MediaElement via "Set Source" and a "Random Access Stream". Please keep in mind that this method has major limitations. in order to use a "Random Access Stream" with the ME you need to make sure all of the data is available before passing it to the ME."
The proxy service is not scalable for an application that is merely distributed for free as every stream would need to come via the proxy. And the raw stream socket, although getting around this, would mean that playback could not start until the whole file had downloaded - and this goes against all current UX (User Experience) guidelines.
So once again, if anyone has any tips, or info about how the whole authentication thing can be achieved in a querystring instead of using headers, I'd appreciate it!
I'm a little confused about whether you're referring to a public or a private track. If it's a public track, then you shouldn't need to send any authentication information, just your client_id.
When I request https://api.soundcloud.com/tracks/31164607/stream?client_id=YOUR_CLIENT_ID then I get a 302 redirect to the proper mp3 stream.
Remember, the first parameter appended to a URL must start with a ?, not &. This could (more than likely) be the reason why you are getting a 401 (SC is not picking up the client_id).
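To illustrate in Delphi/Indy terms (a sketch only; YOUR_CLIENT_ID is a placeholder, and HTTPS needs an SSL IOHandler such as the OpenSSL one):
uses
  Classes, IdHTTP, IdSSLOpenSSL;
var
  http: TIdHTTP;
  mp3: TMemoryStream;
begin
  http := TIdHTTP.Create(nil);
  mp3 := TMemoryStream.Create;
  try
    http.IOHandler := TIdSSLIOHandlerSocketOpenSSL.Create(http);
    http.HandleRedirects := True; // follow the 302 to the actual media host
    http.Get('https://api.soundcloud.com/tracks/31164607/stream?client_id=YOUR_CLIENT_ID', mp3);
  finally
    mp3.Free;
    http.Free;
  end;
end;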
After authentication, a link like this:
http://api.soundcloud.com/tracks/103229681/stream?consumer_key=d61f17a08f86bfb1dea28539908bc9bf
works fine. I am using ActionScript.
I'm following up on Tom's reply because he calls attention to URL character specifics. My HTTP requests randomly started failing today, and I was prefacing my client_Id with a ?. As soon as I changed that single ? to &, it started working. So in my case, SC wasn't picking up my client_Id because I used the wrong character. It's worth noting that ? and & are not interchangeable: ? introduces the query string, and & separates the parameters that follow, so which one you need depends on where in the URL your parameter sits.

Sending custom HTTP error information to Flash, JavaScript, etc

I'm developing a REST API at the moment, and one of its core features is that it uses a variety of HTTP status codes to return status/error information, some of which may be extended information (e.g. if an item is not found, some other similar items), which will be in the response body.
This is fine until you get to 'crippled' clients like Flash and JavaScript, which can't access the response body or headers unless the HTTP status code is 200 OK (even a 201 Created success code can cause Flash to fail, thinking it's an error).
So my question is, is there a standard way for allowing this type of client to request that all status codes are HTTP 200, and to indicate the real status code in another way?
One solution I was thinking of, in the pattern of the HTTP Accept-* family of headers, is an X-Accept-Status extension header to specify which status codes the client can handle; e.g. Flash would send...
X-Accept-Status: 200
...and then any status code not in this list would be mapped to one that is, and the error returned in the response body, possibly with another extension header indicating the real status code, e.g.
X-HTTP-Status-Code: 404 Not Found
This all seems a bit horrible, and works against the protocol, but if you have clients that cannot use the protocol properly then that's unavoidable. I'm just looking for something a bit like X-HTTP-Method-Override (which is a 'standard' way of working around the protocol for clients that cannot send PUT/DELETE requests), but for clients that cannot understand status codes.
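For what it's worth, a sketch of that remapping idea on the server side (the TWebRequest/TWebResponse names are Delphi WebBroker's; treating X-Accept-Status as a simple substring list is an assumption for illustration):
uses
  SysUtils, HTTPApp; // Web.HTTPApp in unit-scoped Delphi versions

procedure MapStatusForLimitedClients(Request: TWebRequest; Response: TWebResponse);
var
  Accepted: string;
begin
  Accepted := Request.GetFieldByName('X-Accept-Status'); // e.g. '200'
  if (Accepted <> '') and (Pos(IntToStr(Response.StatusCode), Accepted) = 0) then
  begin
    // Preserve the real status in an extension header, then downgrade to
    // 200 so Flash/JavaScript clients can still read the body.
    Response.SetCustomHeader('X-HTTP-Status-Code',
      IntToStr(Response.StatusCode) + ' ' + Response.ReasonString);
    Response.StatusCode := 200;
  end;
end;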
well, actually, the problem with HTTP and REST is that REST is a really good idea and HTTP describes a really good implementation of it ... but really, many clients and servers only implement part of HTTP ...
i don't think HTTP is a must ... still, REST is a good idea and RESTfulness of a system is a powerful property ... so why not use HTTP as a stupid transport layer for a RESTful system?
this is what you are doing, although in my opinion you are holding on a bit too much to HTTP and all its theoretically built-in features ... do you really need to transport the information in a status code?
don't depend so much on your transport protocol/layer ... have a clear idea in mind of how your service should work ... separate the protocol semantics from its implementation ... on both client and server ... abstract your RESTfulness and status codes too (make them more than just integers ... make them enums, or objects ... exceptions, why not? a tiny sketch follows the list below) ...
and then plug-in protocols/transport layers at will ...
make a standard HTTP implementation
make a hacky one, using the solution you described (which to me seems perfectly valid ... if people are using technologies unable to use the standards, why should you bother too much about finding the most standards-conformant solution)
make whatever you have the time to do, and your server is able to do, binary, JSON, XML ... whatever seems adequate ...
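To illustrate the 'more than just integers' point above, a tiny Delphi-flavored sketch (all names are invented for illustration):
type
  TServiceStatus = (ssOk, ssCreated, ssNotFound, ssServerError);

// The transport layer maps raw HTTP codes (or X-HTTP-Status-Code headers)
// to the abstract status; the rest of the client never sees an integer.
function StatusFromHttp(Code: Integer): TServiceStatus;
begin
  case Code of
    200: Result := ssOk;
    201: Result := ssCreated;
    404: Result := ssNotFound;
  else
    Result := ssServerError;
  end;
end;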
two technical notes, though:
flash player does its HTTP traffic through the browser ... and it simply does not get the status codes from the browser ... well, it depends on the browser in fact ... the specs say it does not work for: "Netscape, Mozilla, Safari, Opera, and Internet Explorer for the Macintosh." ... so IE for Windows should be working? Chrome? I don't know ... but I think it doesn't matter, since obviously you cannot rely on it ... oh, and to state the most obvious: JavaScript also does its HTTP through the browser, of course ... so same problem here ...
for both, this implies that if you succeeded in finding something like an X-HTTP-Method-Override for the response that was built into the protocol, a good browser would understand it and would remap things accordingly before deciding which information to give to JavaScript or 3rd-party plugins ... so you'd end up with nothing again ... i guess ...
you should simply choose your response method based on the client ... and maybe the client should send some extra info if it is unable to use the HTTP standard ... otherwise throw at it what follows the standard ... i'd first make an implementation using standard HTTP, yet hiding the HTTP itself away, and once everything works, write one using ...
greetz
back2dos
Am I wrong for thinking that one shouldn't let a crippled out-of-the-box potential client of the API dictate the features of the API implementation? I guess practical considerations win the day, but in general my vote is in favor of building API implementations "properly" and requiring custom client-side programming as needed.
Bit late for that response, but...
When I implemented a Flash client API with an early version of OpenRasta, I had an X-ResponseLine header that contained the response code and text on each outgoing response.
As custom headers are by default only generic headers, they have no involvement in caching, so there is no reason to have an Accept / Vary on this.
