For example, I've got the webserver's root directory /var/www/ and a user's home directory /var/www/testuser/. I also have basic authentication set up, so there is a user with the username testuser who has successfully authenticated. How can I check whether testuser is browsing their own home directory by means of the webserver alone? This is how far I've got:
# Getting "testuser" out of "/testuser/echo.php"
SetEnvIf Request_URI ^/(.*)/ URI_HOME=$1
# Getting base64 encoded part out of Authorization header
SetEnvIf Authorization "^Basic (.*)$" X_HTTP_AUTHORIZATION=$1
# Converting base64 part to plain text, extracting username and comparing it with home directory
SetEnvIfExpr "tolower(unbase64(%{ENV:X_HTTP_AUTHORIZATION})) == %{ENV:URI_HOME}" USER_IS_IN_HOME_DIR
The major problem is that Apache hasn't set REMOTE_USER yet at the stage when SetEnvIf runs, so I absolutely have to parse the Authorization header from the request myself. I've almost done it, but I still have to cut off the part after the colon (the password) to make the comparison work properly.
How can I do it?
The following seems to be working:
SetEnvIf Request_URI ^/(.*)/ URI_HOME=$1
SetEnvIf Authorization "^Basic (.*)$" X_HTTP_AUTHORIZATION=$1
SetEnvIfExpr "unbase64(%{ENV:X_HTTP_AUTHORIZATION}) -strcmatch '%{ENV:URI_HOME}:*'" USER_IS_IN_HOME_DIR
Any ideas how to improve it?
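If the eventual goal is to let each user into their own directory only, one way to consume that variable (an untested sketch, assuming Apache 2.4 and the basic-auth setup described in the question) is an authorization block such as:
<Directory "/var/www">
    <RequireAll>
        Require valid-user
        Require env USER_IS_IN_HOME_DIR
    </RequireAll>
</Directory>
SetEnvIf runs while the request headers are parsed, before authorization, so the variable should already be set by the time the Require directives are evaluated.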
Based on the DocPad documentation on primary URLs, all requests to a document's secondary URLs should be redirected to the primary URL. But in fact it responds with the expected page directly when any secondary URL is requested, without any redirection.
For example, say you have a DocPad document /src/documents/secondary-url.html.md like:
---
urls:
- '/my-secondary-urls1'
- '/my-secondary-urls2'
---
# primary url should be `secondary-url.html`
Then run the command $ docpad run
It responds with status 200 when hitting either http://localhost:9778/my-secondary-urls1 or http://localhost:9778/my-secondary-urls2, while the expected result is a 301 redirect to http://localhost:9778/secondary-url.html.
It seems to be intended behaviour, judging by this line of the DocPad code.
I'm curious whether this is a defect or a deprecated feature.
BTW: I have a simple fix here which won't become a pull request until I read the contribution guide: https://github.com/shawnzhu/docpad/commit/731cdec43f9d9d155c8a8310494575d9746a065c
This was addressed in issue 850 of the DocPad project and fixed in pull request 905, so versions of DocPad later than v6.70.1 will contain this fix.
I would like to download some files I have uploaded to S3.
For the moment, all my buckets and files inside them are public, so I can download what I want.
Unfortunately, I can't access files whose names contain special characters like a space or "&"...
I tried replacing the special characters in the URL with their percent-encoded equivalents, changing:
http://s3-eu-west-1.amazonaws.com/custom.bucket/mods/b&b.jar
to
http://s3-eu-west-1.amazonaws.com/custom.bucket/mods/b%26b.jar
But I always get the same error:
<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<RequestId>3E987FCE07075166</RequestId>
<HostId>
O2EIujdbiAeYg44rsezQlargfT7qVSL8SpqbTxkd/1UwxQrwZ3SJ+R3NlHyGF7rI
</HostId>
</Error>
Could anybody help me resolve this problem?
I can't rename the files because they are used by other applications.
I am able to download public files with '&' in the name with no problems using curl:
curl https://s3.amazonaws.com/mybucket/test/b%26b.jar
Recheck the permissions on your file using the AWS console. Make sure the file has "Grantee: Everyone" with the Open/Download permission checked, and make sure to click the "Save" button after you add these permissions. Alternatively, try using your security credentials.
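If you later decide to restrict the files instead of leaving them public, a pre-signed URL sidesteps both the permissions and the encoding questions. A rough sketch with the aws-sdk-s3 Ruby gem (bucket and key taken from the question, credentials assumed to be configured in the environment):
require 'aws-sdk-s3'

# Generate a temporary signed GET URL; the SDK percent-encodes the key ("b&b.jar") for you.
object = Aws::S3::Resource.new(region: 'eu-west-1')
                          .bucket('custom.bucket')
                          .object('mods/b&b.jar')
puts object.presigned_url(:get, expires_in: 3600)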
I am able to download a file with a special character in its name:
# wget --no-check-certificate https://s3-us-west-2.amazonaws.com/bucket1234/b%26b.jar
--2013-12-01 14:15:20-- https://s3-us-west-2.amazonaws.com/bucket1234/b%26b.jar
Resolving s3-us-west-2.amazonaws.com... 54.240.252.26
Connecting to s3-us-west-2.amazonaws.com|54.240.252.26|:443... connected.
WARNING: certificate common name `*.s3-us-west-2.amazonaws.com' doesn't match requested host name `s3-us-west-2.amazonaws.com'.
HTTP request sent, awaiting response... 200 OK
Length: 0 [application/x-java-archive]
Saving to: `b&b.jar'
[ <=> ] 0 --.-K/s in 0s
2013-12-01 14:15:22 (0.00 B/s) - `b&b.jar' saved [0/0]
Are you sure that this file is publicly visible? Could you double-check the permissions for this file? This is definitely not an issue with the special character.
Can you just log in to the AWS S3 console and check what download link it shows there?
Is there any mismatch in the link because of double encoding? Please make sure you are not doing any URL encoding in your code while uploading the file.
In your case it could be:
http://s3-eu-west-1.amazonaws.com/custom.bucket/mods/b%2526b.jar
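To see what double encoding does to the key, here is a quick check in Ruby (the filename is just the one from the question):
require 'cgi'

once  = CGI.escape('b&b.jar') # => "b%26b.jar"   -- the key encoded once, as the URL needs
twice = CGI.escape(once)      # => "b%2526b.jar" -- an already-encoded key encoded again
If your upload code escapes the key and the SDK or browser escapes it again, the stored key no longer matches the URL you request, and S3 reports AccessDenied rather than NoSuchKey when you are not allowed to list the bucket.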
My site at www.kruaklaibaan.com (yes I know it's hideous) currently has 3.7 million likes but while working to build a proper site that doesn't use some flowery phpBB monstrosity I noticed that all those likes are registered against an invalid URL that doesn't actually link back to my site's URL at all. Instead the likes have all been registered against a URL-encoded version:
www.kruaklaibaan.com%2Fviewtopic.php%3Ff%3D42%26t%3D370
This is obviously incorrect. Since I already have so many likes, I was hoping either to get those likes updated to the correct URL or to have them just point to the base URL of www.kruaklaibaan.com.
The correct url they SHOULD have been registered against is (not url-encoded):
www.kruaklaibaan.com/viewtopic.php?f=42&t=370
Is there someone at Facebook I can discuss this with? 3.7m likes is a little too many to start over with without a lot of heartache. It took 2 years to build those up.
Short of getting someone at Facebook to update the URL, the only option within your control that I could think of that would work is to create a custom 404 error page. I have tested such a page with your URL and the following works.
First you need to set the Apache directive for ErrorDocument (or equivalent in another server).
ErrorDocument 404 /path/to/404.php
This will cause any 404 pages to hit the script, which in turn will do the necessary check and redirect if appropriate.
I tested the following script and it works perfectly.
<?php
// If the request is for the URL-encoded path the likes were registered
// against, send a permanent redirect to the real topic URL.
if ( $_SERVER['REQUEST_URI'] == '/%2Fviewtopic.php%3Ff%3D42%26t%3D370' ) {
    header("HTTP/1.1 301 Moved Permanently");
    header("Location: /viewtopic.php?f=42&t=370");
    exit();
} else {
    // Anything else really is a 404.
    header('HTTP/1.0 404 Not Found');
}
?><html><body>
<h1>HTTP 404 Not Found</h1>
<?php echo $_SERVER['REQUEST_URI']; ?>
</body></html>
This is a semi-dirty way of achieving this; however, I tried several variations in Apache 2.2 using mod_alias's Redirect and mod_rewrite's RewriteRule, neither of which I was able to get working with a URL containing percent-encoded characters. I suspect that with nginx you may have better luck finding a more graceful way to handle this in the server.
I am developing an application in Rails which needs to check whether a sitemap exists for an entered website URL. For example, if a user enters http://google.com, it should return "Sitemap present". The solutions I've seen assume websites expose either /sitemap.xml or /sitemap, so I tried checking this with the Typhoeus gem: if the response code for the URL (like www.google.com/sitemap.xml or www.apple.com/sitemap) is 200 or 301, then a sitemap exists, otherwise it doesn't. But I have found that some sites return a 301 even if they don't have a sitemap and simply redirect to their main page (for example http://yournextleap.com/sitemap.xml), so I don't get a conclusive result. Any help would be really great.
Here is my sample code to check for a sitemap using Typhoeus:
require 'typhoeus'

# the request object
request = Typhoeus::Request.new("http://apple.com/sitemap")

request.on_complete do |response|
  if response.code == 301
    p "success 301" # hell yeah
  elsif response.code == 200
    p "Success 200"
  elsif response.code == 404
    puts "Could not get a sitemap, something's wrong."
  else
    p "check your input!!!!"
  end
end

# Run the request via Hydra.
hydra = Typhoeus::Hydra.new
hydra.queue(request)
hydra.run
The HTTP response status code 301 Moved Permanently is used for permanent redirection. This status code should be used together with the Location header. RFC 2616 states that:
If a client has link-editing capabilities, it should update all references to the Request URI.
The response is cacheable.
Unless the request method was HEAD, the entity should contain a small hypertext note with a hyperlink to the new URI(s).
If the 301 status code is received in response to a request of any type other than GET or HEAD, the client must ask the user before redirecting.
I don't think it's fair for you to assume that a 301 response indicates that there was ever a sitemap. If you're checking for the existence of a sitemap.xml file or a sitemap directory, then the correct response to expect is a 2XX.
If you insist on treating a 3XX response as a redirect to a sitemap, then follow the redirect and add logic to check the URL of the final page (is it the homepage?) or its content to see whether it has an XML structure, as in the sketch below.
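A rough sketch of that check with Typhoeus (my own illustration, untested against these sites): follow redirects, then look at the final URL and the start of the body:
require 'typhoeus'

url = "http://yournextleap.com/sitemap.xml" # example from the question
response = Typhoeus.get(url, followlocation: true)

looks_like_sitemap =
  response.code == 200 &&
  response.effective_url =~ /sitemap/i && # not silently redirected to the homepage
  response.body.lstrip.start_with?('<?xml', '<urlset', '<sitemapindex')

puts looks_like_sitemap ? "Sitemap present" : "No sitemap found"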
The sitemap may also be compressed as sitemap.xml.gz, so you may have to check for that filename too. Also, a site may have an index file that points to many other sub-sitemaps, which may themselves be named differently.
For example, in my project I have:
sitemap_index.xml.gz
-> sitemap_en1.xml.gz (english version of links)
-> sitemap_pl1.xml.gz (polish version of links)
-> images_sitemap1.xml.gz (only images sitemap)
Websites ping search engines with those filenames, but sometimes they also include them in the /robots.txt file, so you may try hunting for them in there (a small sketch for doing that follows the list below). For example, http://google.com has this at the end of its robots.txt:
(See how weird sitemaps' names can be!)
Sitemap: http://www.gstatic.com/s2/sitemaps/profiles-sitemap.xml
Sitemap: http://www.google.com/hostednews/sitemap_index.xml
Sitemap: http://www.google.com/ventures/sitemap_ventures.xml
Sitemap: http://www.google.com/sitemaps_webmasters.xml
Sitemap: http://www.gstatic.com/trends/websites/sitemaps/sitemapindex.xml
Sitemap: http://www.gstatic.com/dictionary/static/sitemaps/sitemap_index.xml
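A minimal way to hunt for those entries (a sketch of my own; the helper name is made up):
require 'open-uri'

# Pull "Sitemap:" entries out of a site's robots.txt; returns [] if there are none.
def sitemaps_from_robots(site)
  robots = URI.parse("#{site}/robots.txt").read
  robots.lines.grep(/^\s*sitemap:/i).map { |line| line.split(':', 2).last.strip }
rescue OpenURI::HTTPError, SocketError
  []
end

puts sitemaps_from_robots("http://www.google.com")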
About the 301: you may try identifying yourself as Googlebot or another crawler. Maybe they redirect everyone except robots. But if they redirect everyone, there's nothing you can really do about it.
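If you want to try that, Typhoeus lets you send a custom User-Agent header (a hypothetical snippet; whether the site actually treats crawlers differently is up to the site):
require 'typhoeus'

response = Typhoeus.get("http://yournextleap.com/sitemap.xml",
                        headers: { "User-Agent" => "Googlebot/2.1 (+http://www.google.com/bot.html)" })
p response.code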
I am trying to get an access token from the Facebook Graph API in my Rails 2.3 based web application. The request I am sending for that is:
https://graph.facebook.com/oauth/access_token?client_id=<client_id>
&redirect_uri=http://localhost:3001/facebook_callback
&client_secret=<client_secret>
&code=AQBgog2NvoUYQCXsa2bGpj--s9RD71F3zTKX344cUZ-
AWX4CNhdx3Yerl_wmzQkQ4zIUFVS_CRoN0zXaEW63dHcC9sH6_
vl7ljSxwA6TLSrkWVcfdfdrmwBTlMNIzyJr0h6irGW1LCdTw8
Racgd8MQ9RgVn1gFL26epWA
And it is redirecting me to
http://localhost/facebook_callback?code=AQBgog2NvoUYQCXsa2bGpj--
s9RD71F3zTKX344cUZ AWX4CNhdx3Yerl_wmzQkQ4zIUFVS_CRoN0mAB_Sr1H4K
dXIlzXaEW63dHcC9sH6_vl7ljSxwA6TLSrkWVcfdfdrmwBTlMNIzyJr0h6irG
SxsrRAXtdviNsBTMW1LCdTw8Racgd8MQ9RgVn1gFL26epWA
I am getting an error in both the development and production environments. I am not able to get the access token. Has anyone else faced this problem?
This looks correct - Facebook redirects to your redirect URL with the code= parameter. You then need to exchange the code for an access token. See here: http://developers.facebook.com/docs/authentication/
Edit: my bad, I misread the first section. You can sometimes have problems using localhost as a redirect. Are you using a live domain without a port in your non-test environment?
Well, I found the solution to my problem:
the problem was with the path I was using for the access_token request. I placed a slash in front of the path and bingo, it worked like a charm.
So instead of
oauth/access_token?client_id=#{@client_id}&redirect_uri=#{@redirect_uri}&client_secret=#{@client_secret}&code=#{code}
we just need to use
/oauth/access_token?client_id=#{@client_id}&redirect_uri=#{@redirect_uri}&client_secret=#{@client_secret}&code=#{code}
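For context, here is a minimal sketch of the exchange with plain Net::HTTP (my reconstruction, not the original app's code, assuming the same instance variables as above); the leading slash makes the path resolve against the root of graph.facebook.com:
require 'net/http'
require 'uri'
require 'cgi'

path = "/oauth/access_token?client_id=#{@client_id}" +
       "&redirect_uri=#{CGI.escape(@redirect_uri)}" +
       "&client_secret=#{@client_secret}" +
       "&code=#{CGI.escape(code)}"

uri = URI.parse("https://graph.facebook.com" + path)
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true
response = http.request(Net::HTTP::Get.new(uri.request_uri))

# At the time, this endpoint returned a query string like "access_token=...&expires=...".
access_token = CGI.parse(response.body)["access_token"].first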
Thanks to all people for your efforts.