Hide URL for server side redirect - http-redirect

Configuration: the server is iPlanet 6.1 and the client-side browser is IE6.
I have a url: http://example.com/signOnTodef?cmd=login
This request is received by iPlanet 6.1, where I have a plugin that does some cookie authentication and redirects it to another server:
http://def.com?theQueryStringFromBeforeABitModified
So this is a server side redirect (302).
Now my problem is that this URL, http://def.com?theQueryStringFromBeforeABitModified, shows up in the browser. I don't want this to happen. What can I do to prevent it?

An HTTP 302 is not a "server side" redirect; it is just a redirect. You can't really do a server-side redirect here, because it's the client, not your server, that has to end up at the new page, so the browser will always see the new URL.
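To make that concrete, here is a minimal sketch, in Node.js rather than an iPlanet plugin (def.com and port 8080 are just placeholders): all the server hands the browser is a Location header, and the browser itself then requests, and displays, that URL.

// A 302 is nothing more than a response telling the browser where to go next;
// the browser issues a fresh request to that Location and shows it in the
// address bar, which is why the def.com URL becomes visible.
const http = require('http');

http.createServer(function (req, res) {
  // Forward the (possibly modified) query string to the other server.
  const query = req.url.split('?')[1] || '';
  res.writeHead(302, { Location: 'http://def.com/?' + query });
  res.end();
}).listen(8080);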

As a workaround, you could try doing something like the following PHP equivalent. I don't know what an iPlanet server is, or whether it even has PHP installed, but:
<?php
// Fetch the target page on the server side, so def.com never appears in the
// browser's address bar ($queryString is the forwarded query string).
$file = file_get_contents('http://def.com' . $queryString);
// Inject a <base> tag so relative links and images still resolve against def.com.
echo str_ireplace('<head>', '<head>' . "\n\t" . '<base href="http://def.com/" />', $file);
?>
Or something like that might work depending on the file you are trying to display to the user.
Keep in mind there is no redirect happening here. Your server will be downloading http://def.com for every request and then outputting it to the user.

Related

Request with proxy getting blocked from web server but not on browser

I am trying to make a request to a website with a proxy using httparty like so:
def self.fetch_page_with_html_response(url, proxy_id)
  proxy = Proxy.find(proxy_id)
  request_options = {
    http_proxyaddr: proxy.url, http_proxyport: proxy.port,
    http_proxyuser: proxy.username, http_proxypass: proxy.password,
    headers: { "User-Agent" => proxy.user_agent }
  }
  response = HTTParty.get(url, request_options)
  response
end
On certain websites my request either hangs or returns an error page where the website is blocking me from fetching it.
When I use these same proxy settings in my Chrome browser, with an extension like SwitchyOmega, the request goes through fine and the page loads.
Is there any reason why the request would be getting blocked when made from my web server but not through my browser?
I even tested using the same user agent and providing the same exact headers my browser is sending.
There could be a few reasons in your case.
First, check whether the proxy works correctly: send a GET request to https://api.ipify.org/ through your proxy in code. If it returns the proxy's IP address, then the proxy works.
Then disable JavaScript in Chrome's settings, browse the website via the proxy, and check whether it still loads correctly, because some websites render their HTML and CSS with JavaScript.
Please feel free to reply if you still need help.

Redirect http request to another address

I have a piece of software, developed in Delphi, that makes an HTTP GET request to a specific URL, and I want to redirect that request to another URL.
I have the source code, but I can't recompile it because it wasn't developed by me, so it would take too much effort and time, and right now I can't afford that.
Anyway, back to my problem.
I tried using Fiddler 2. It worked, but only when I access the URL via the browser. When my application sends the request, it doesn't get redirected to the new URL.
Does anyone have any other suggestion of what I can do?

How google url with # works

How does a URL like https://www.google.co.in/#q=harry+potter work?
As per my understanding, anything after the # is not sent to the server.
Yet if we paste the above URL into a browser, we get the search page for Harry Potter.
As I understand it, when one requests the above URL a request is sent to the server, and since the search term "harry potter" comes after the '#', it won't be sent to the server, so the server has no way to determine what to search for. How does it work, then? Does the browser do anything special?
Your understanding is correct: the server does not see your search term.
It's client-side JavaScript, executed on page load, that inspects the URL. It then issues an XHR request with the search term appended in a way that is visible to the server (https://www.google.co.in/search?q=harry+potter&...).
Reload the page with JavaScript disabled and you will see that you get the regular page, without a pre-filled search box or results.
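As a rough illustration only (hypothetical code, not Google's actual script), the client-side logic boils down to reading location.hash and turning it into a request the server can see:

// Runs in the browser after the page loads. location.hash is the part of
// the URL after '#', which the server never received.
window.addEventListener('load', function () {
  var hash = window.location.hash;          // e.g. "#q=harry+potter"
  if (hash.indexOf('#q=') === 0) {
    var term = hash.slice(3);               // "harry+potter"
    // Fetch the results via XHR, using a URL the server *does* see.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/search?q=' + term);
    xhr.onload = function () {
      // Render the returned results into the page (details omitted).
    };
    xhr.send();
  }
});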

Is it possible to capture every server request resulting in a 404?

We are currently capturing the requested URL when someone gets redirected to our 404 page. However, this does not let us see reports on things like broken images. Is it possible to get this information into SiteCatalyst, for example by taking the URL of every server request that received a 404 response and storing it in a variable? What would be a sensible way to go about this? I Googled and couldn't find anything.
I want to be able to pull a report on every broken URL reference on a site and the page it happened on...
You can configure your web server, say Apache for instance, to send a 404 error to a specific web page, say:
ErrorDocument 404 /my_path/not_found.html
Then you can do the tracking inside not_found.html, in embedded JavaScript.
Here's how to configure Apache to handle this error request:
http://httpd.apache.org/docs/2.2/custom-error.html
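As a sketch of what that embedded JavaScript might look like, assuming the standard SiteCatalyst s_code.js object is already loaded on the error page, and with prop1/prop2 standing in for whichever variables you have reserved for this:

<script type="text/javascript">
  // Apache serves not_found.html via an internal redirect, so the address
  // bar still holds the URL that actually returned the 404.
  s.pageName = "404 error page";
  s.prop1 = window.location.href;   // the broken URL that was requested
  s.prop2 = document.referrer;      // the page the broken link was on
  s.t();                            // send the hit to SiteCatalyst
</script>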

Facing issue while trying to check the Incoming request in Fiddler

I am trying to check the incoming requests to my server. Another server, which hosts an MVC application, has an action method that sends some data to my server. I am using Fiddler, but somehow it is not showing the incoming request.
Below are my settings in Fiddler's custom rules:
static function OnBeforeRequest(oSession: Session) {
    // Re-target traffic that arrives at Fiddler's port (8888) to the real server port (82).
    if (oSession.host.toLowerCase() == "IP Address:8888")
        oSession.host = "IP Address:82";
}
Below are my Fiddler options.
Am I missing anything?
It sounds like you're trying to use Fiddler as a reverse proxy. You should read the steps at http://www.fiddler2.com/r/?reverseproxy. The biggest thing to understand is that when running as a reverse proxy, you only see traffic in Fiddler if the client's URL is changed to point at Fiddler.
If it is an SSL connection, then you need to enable the option to capture HTTPS connections on the 'HTTPS' tab. Did you try to invoke the request from another browser, like Chrome? Does Fiddler capture anything?
You don't need a custom rule for this case; it should work if you enable these settings. I have only faced some problems in other browsers, like Firefox.
I'm not sure I can answer your question fully without knowing a few additional pieces of information.
If the request being made is not an HTTP request, Fiddler will not be able to handle it.
Also, if the traffic is being sent to the loopback address localhost, Fiddler may not see it; pointing the client at the machine name instead of localhost is a common workaround.
