I want to create a proxy controller in Grails: something that takes whatever is passed in based on a URL mapping, records what was asked for, sends the request to another server, records the response, and sends the response back to the browser.
I'm having trouble when the request has an odd file extension (.gif) or no file extension (/xxx?sdcscd).
My url mapping is:
"/proxy/$target**"
and I've attempted (per an answer to another question):
def targetURL = params.target
if (!FilenameUtils.getExtension(targetURL) && request.format) {
    targetURL += ".${response.format}"
}
but this usually appends .html and never .gif, and it does nothing for the ?sdcscd case.
I'm not sure what to do; I might just write the thing in straight Java.
Actually, the real answer was sitting in the post you linked to previously all along, by Peter Ledbrook:
Disable file extension truncation by adding this line to grails-app/conf/Config.groovy:
grails.mime.file.extensions = false
This will disable the use of file extensions for format detection, but it will leave the file extension on params.target. You can completely ignore response.format!
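For reference, a minimal sketch of what the proxy action itself might look like once the mapping delivers the full path in params.target (the backend host and the log-based recording are placeholder assumptions, not part of the original answer):

class ProxyController {
    // Hypothetical backend; substitute the server you are proxying to.
    static final String BACKEND = "http://backend.example.com"

    def index() {
        // With grails.mime.file.extensions = false, params.target keeps its .gif etc.
        def targetURL = "${BACKEND}/${params.target}"
        if (request.queryString) {
            targetURL += "?${request.queryString}"   // preserves /xxx?sdcscd-style requests
        }
        log.info "Proxying request for ${targetURL}"

        def conn = new URL(targetURL).openConnection()
        def body = conn.inputStream.bytes
        log.info "Proxied response: ${conn.contentType}, ${body.length} bytes"

        response.contentType = conn.contentType
        response.outputStream << body
    }
}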
I'm trying to post a user status update to the Goodreads API.
Most of the time my request returns 200 OK and does nothing. Every now and then, though, it returns 201 Created and the status is updated. When it works, it's always the first time I make the call after launching the app in the iOS simulator; subsequent calls never work.
I don't think the problem is the API itself, since the official Goodreads iOS app uses the same call and it always works.
Their API is notorious for having problems with calls that include brackets in the parameters, but I can make other calls that contain brackets and they work fine; the problem is just with this one.
I'm using OAuthSwift and this is my code:
oAuth.client.post(
    "http://www.goodreads.com/user_status",//.xml",//?user_status[book_id]=6366035&user_status[page]=168",
    parameters: ["user_status[page]" : 168, "user_status[book_id]" : 6366035, "format" : "xml"],
    //headers: ["Content-Type" : "application/x-www-form-urlencoded"],
    success: { data, response in
        print("")
        print(response)
    },
    failure: { error in
        print("")
        print(error)
    }
)
(The commented-out parts are alternatives I tried unsuccessfully.)
I'm printing the base string that gets signed and it looks the same for the calls that work and the ones that don't, except for the nonce and the timestamp, obviously.
The headers also include the oauth_signature, which changes every time and sometimes contains characters that OAuthSwift percent-encodes, so that could account for the call working only some of the time (it could work only when the signature doesn't contain a certain character)… but I'm printing out the headers too, and I don't see any patterns or any discernible difference between the headers of the calls that work and those of the calls that don't.
So now I don't know what else to test… I'm checking the base string and the headers for calls that work and for calls that don't, and they look the same… Could anybody think of something else that changes between calls that I should check? I have no idea what could be causing this, and I don't know how to debug it.
Thanks in advance,
Daniel
Edit: Very weird… I tried my request with Paw, a Mac REST client, and with Chrome's Postman extension. If I use https I get 404 on my first call, then 201 on the second, then 404 on the third, 201 on the fourth, and so on: it works every other time. On the call that works it doesn't matter whether I use http or https; it works as long as there was a failed https request just before.
So I tried doing the same in my app: I added two https calls one after the other… in my app they always return 404.
So it seems like Postman, Paw and OAuthSwift are handling the requests differently. I don't know what the difference between those clients could be… the signature base string seems to be the same for all three, the headers too… so what else could change between them?
With the newer versions of Xcode, apps can by default only communicate with an HTTPS server (App Transport Security). I expect Goodreads supports HTTPS, so you can change the URL. Or you can edit your Info.plist file:
App Transport Security Settings > Allow Arbitrary Loads > YES
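For reference, the equivalent raw Info.plist XML is:

<key>NSAppTransportSecurity</key>
<dict>
    <key>NSAllowsArbitraryLoads</key>
    <true/>
</dict>

Keep in mind this turns App Transport Security off for every connection the app makes, so switching the URL to https is the preferable fix.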
I'm creating a program to download web pages using the Http module of FSharp.Data. However, the module doesn't support setting an HTTP proxy server. In C# there is:
_httpWebRequest.Proxy =
    new System.Net.WebProxy("http://proxy.myCompany.com:80", true);
I tried to download the file from https://github.com/fsharp/FSharp.Data/blob/master/src/Library/Http.fs and use it directly in my F# project. However, the type of the response changed from string to HttpResponse after I call Http.Request from the downloaded file:
let response =
    Http.Request (
        url,
        query=["userid", user; "password", password; "login", "Sign+On"],
        meth="POST",
        cookieContainer = cc)
What's the best way to extend the Http module with proxy support?
In FSharp.Data 2.0, you can pass the parameter customizeHttpRequest of type HttpWebRequest -> HttpWebRequest to set the proxy, like this:
Http.Request (
    url,
    query=["userid", user; "password", password; "login", "Sign+On"],
    meth="POST",
    cookieContainer = cc,
    customizeHttpRequest = (fun req -> req.Proxy <- WebProxy("http://proxy.myCompany.com:80", true); req))
In the new version (upcoming release), we are renaming the current Http.Request to Http.RequestString and the current Http.RequestDetailed to Http.Request. This is a breaking change, but we think it makes much more sense (and fits better with standard .NET naming). If you just want to copy the old file, you can always get the older version of the code from the appropriate branch on GitHub (e.g. Http.fs at tag v1.1.10).
However, I think that supporting HTTP proxies would be a great addition to the library, so the best thing to do would be to fork the project on GitHub, add the feature and submit a pull request! I think that just adding an optional ?proxy parameter to the two methods and propagating the information to the underlying HttpWebRequest would be the best way to do this.
The only tricky thing is that Http.Request should work on multiple versions of .NET (including Windows Phone, Silverlight, etc.) so you may need to check which of them actually support specifying the proxy.
If you do not have the time for helping out, then please submit a GitHub issue.
Have you tried overriding the proxy globally with WebRequest.DefaultWebProxy = new System.Net.WebProxy("http://proxy.myCompany.com:80", true)?
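In F# syntax that would look something like the following; note that it is a process-wide setting affecting every WebRequest, not just FSharp.Data's:

open System.Net

// Route all subsequent WebRequests (including FSharp.Data's) through the proxy.
WebRequest.DefaultWebProxy <- WebProxy("http://proxy.myCompany.com:80", true)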
My site at www.kruaklaibaan.com (yes, I know it's hideous) currently has 3.7 million likes, but while working to build a proper site that doesn't use some flowery phpBB monstrosity, I noticed that all those likes are registered against an invalid URL that doesn't actually link back to my site's URL at all. Instead, the likes have all been registered against a URL-encoded version:
www.kruaklaibaan.com%2Fviewtopic.php%3Ff%3D42%26t%3D370
This is obviously incorrect. Since I already have so many likes, I was hoping either to get those likes updated to the correct URL or to have them point to the base URL of www.kruaklaibaan.com.
The correct URL they SHOULD have been registered against is (not URL-encoded):
www.kruaklaibaan.com/viewtopic.php?f=42&t=370
Is there someone at Facebook I can discuss this with? 3.7m likes is a little too many to start over from without a lot of heartache; it took 2 years to build them up.
Short of getting someone at Facebook to update the URL, the only option within your control that I could think of that would work is to create a custom 404 error page. I have tested such a page with your URL, and the following works.
First you need to set the Apache ErrorDocument directive (or the equivalent in another server).
ErrorDocument 404 /path/to/404.php
This will cause any 404 pages to hit the script, which in turn will do the necessary check and redirect if appropriate.
I tested the following script and it works perfectly.
<?php
if ($_SERVER['REQUEST_URI'] == '/%2Fviewtopic.php%3Ff%3D42%26t%3D370') {
    header("HTTP/1.1 301 Moved Permanently");
    header("Location: /viewtopic.php?f=42&t=370");
    exit();
} else {
    header('HTTP/1.0 404 Not Found');
}
?><html><body>
<h1>HTTP 404 Not Found</h1>
<?php echo $_SERVER['REQUEST_URI']; ?>
</body></html>
This is a semi-dirty way of achieving this; however, I tried several variations in Apache 2.2 using mod_alias's Redirect and mod_rewrite's RewriteRule, neither of which I was able to get working with a URL containing percent-encoded characters. I suspect that with nginx you may have better luck finding a more graceful way to handle this in the server.
d3.csv("result.csv", function(flights) {
var nestByDate = d3.nest()
.key(function(d) { return d3.time.day(d.date); });
..........
When I run the above d3.js code from a web server, it executes properly and loads the CSV file.
But when I run the d3.js code as shown below,
d3.csv("D:\\Project Space\\D3Demo\\WebContent\\result.csv", function(flights) {
var nestByDate = d3.nest()
.key(function(d) { return d3.time.day(d.date); });
..........
it shows the following error:
XMLHttpRequest cannot load file:///D:/Project%20Space/D3Demo/WebContent/result.csv. Cross origin requests are only supported for HTTP.
How can I solve this problem?
There is no way to solve the problem using D3's convenience functions.
d3.csv is fundamentally an AJAX request and is beholden to the same-origin policy.
When you load the file from its location on disk, your browser sees that the requested file does not live on the same origin as the page (likely localhost in your case) and prevents the request from completing.
A simple way to get around this is to serve the content over localhost (or whatever host you are using).
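For example, assuming you have Python 3 installed, you can serve the project folder with its built-in static file server:

cd "D:\Project Space\D3Demo\WebContent"
python -m http.server 8000

Then open http://localhost:8000 in the browser and keep the relative d3.csv("result.csv", ...) form from the first snippet.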
Alternatively, you can look into Cross-Origin Resource Sharing or, for better compatibility, JSONP. In both of these cases you will likely have to roll your own function to convert the CSV data into a JavaScript array.
I'm trying out HTTP requests to download a PDF file from Google Docs using the Google Documents List API and OAuth 1.0. I'm not using any external API for OAuth or Google Docs.
Following the documentation, I obtained the download URL for the PDF, which works fine when placed in a browser.
According to documentation I should send a request that looks like this:
GET https://doc-04-20-docs.googleusercontent.com/docs/secure/m7an0emtau/WJm12345/YzI2Y2ExYWVm?h=16655626&e=download&gd=true
However, the download URL has something odd going on with its parameters; it looks like this:
https://doc-00-00-docs.googleusercontent.com/docs/securesc/5ud8e...tMzQ?h=15287211447292764666&amp;e=download&amp;gd=true
(the URL really does contain the literal sequence '&amp;' between the parameters rather than a bare '&').
So what is the case here: do I have 3 parameters (h, e, gd), or one parameter h with the value 15287211447292764666&amp;e=download&amp;gd=true, or maybe the following 3 parameter-value pairs: h = 15287211447292764666, amp;e = download, amp;gd = true (which I think is the case, and it seems like a bug)?
In order to form a proper HTTP request I need to know exactly what the parameter names and values are, but the download URL I have is confusing. Moreover, if the parameter names really are h, amp;e and amp;gd, is a request containing those parameters valid for obtaining the file content? (If not, it seems like a bug.)
I didn't have problems downloading and uploading documents (MS Word docs), and my scope for downloading a file is correct.
I experimented with different requests a lot. When I treat the 3 parameters (h, e, gd) separately I get 401 Unauthorized. If I assume that I have only one parameter, h, with the value 15287211447292764666&amp;e=download&amp;gd=true, I get 500 Internal Server Error (the Google API states: 'An unexpected error has occurred in the API.', 'If the problem persists, please post in the forum.').
If I don't put any parameters at all, or I put the 3 parameters h, amp;e and amp;gd, I get 302 Found. I tried following the redirections with more requests, but I still couldn't get the actual PDF content. I also experimented in the OAuth Playground, and it doesn't seem to be working as it's supposed to either: sending a GET request with the download URL responds with 302 Found instead of the PDF content.
What is going on here? How can I obtain the PDF content in a response? Please help.
I ran into the same issue with OAuth2 (error 401).
I solved it by putting the OAuth2 token in the request header instead of in the URL:
I replaced &access_token=<token> in the URL with setRequestHeader("Authorization", "Bearer <token>").
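A minimal sketch of that header-based request, assuming a browser XMLHttpRequest client (downloadUrl and accessToken are hypothetical placeholders for the values discussed above):

var xhr = new XMLHttpRequest();
xhr.open("GET", downloadUrl);                                    // the ...?e=download&gd=true URL
xhr.setRequestHeader("Authorization", "Bearer " + accessToken);  // token in the header, not the URL
xhr.responseType = "blob";                                       // the PDF arrives as binary data
xhr.onload = function () {
    // xhr.response is a Blob containing the PDF
};
xhr.send();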