IIS instances sharing data - asp.net-mvc

I don't know if this is common or something, but I wanted to check. I am building a site on an IIS 7 server and have come across a weird problem. Whenever I have two clients accessing the site, it seems they are sharing info. Here is an example: when one client does a search for a particular item, the other client goes to the search page and sees the first client's search results. I am using a global class to store this information in my code-behind.
So here is my question: my understanding of servers was that if two clients accessed the server, they were running in different instances of the site, meaning that even if I have a global class in my code it would be as if two separate machines were running it. Am I wrong in this understanding?
Also are there settings in IIS that I need to change for this to work?

In ASP.NET, you can use Session variables, which store per-user data in server memory keyed by a session cookie, so each client gets its own copy rather than sharing one global object. You can store HTML form info in the session so another page on your site can read it.
The syntax in your MVC controller action to create a Session would be:
Session["MyFormData"] = someObject;
http://msdn.microsoft.com/en-us/library/ms178581.aspx
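A static or global class is shared by every request in the application, whereas Session is scoped to one user. A minimal sketch of the idea in a controller (the SearchController, SearchResult and SearchService names here are hypothetical, not from the question):

public class SearchController : Controller
{
    [HttpPost]
    public ActionResult Index(string query)
    {
        // Hypothetical search call; the point is that the result is stored per user,
        // keyed by that user's session cookie, instead of in a shared static field.
        List<SearchResult> results = SearchService.Find(query);
        Session["MyFormData"] = results;
        return RedirectToAction("Results");
    }

    public ActionResult Results()
    {
        // Null for a user who has not searched yet (or whose session expired).
        var results = Session["MyFormData"] as List<SearchResult>;
        return View(results);
    }
}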

Related

How can I use Domain-based routing?

I have a simple web server running Windows 2012 with IIS. I have half a dozen domains linked to this server that are basically not in use yet. I have a few more domains which are used, but they could all have various subdomains that aren't supported by any site yet. So I have a default site in IIS set to catch all incoming requests that aren't handled by any other site on the server or any other server, and its main purpose is to show a "Page not in use yet" message.
That's easy to set up but I want these pages to be a bit more fancy. So I want to have some kind of routing based on the domain name so example.com and sub.example.com and sub.sub.example.com would all be handled by the same view, but anotherexample.com would be handled by a different view and thirdexample.com by yet another view. And any domain that is not caught by this routing system would go to the default view.
And I wonder if there's a simple way to do this. Something like [route("example.com")] as a controller attribute which the system would recognize as the controller for a specific domain and its subdomains. (And the URL path can be ignored.) I don't know if something like this already exists; I have searched Google but found nothing yet.
I can create a custom route, of course. But this tends to result in an if-then-else situation for all potential domain names. I need to know if there's a better method.
Use the URL rewrite module for IIS:
https://learn.microsoft.com/en-us/iis/extensions/url-rewrite-module/using-the-url-rewrite-module
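If you would rather keep this inside MVC routing (the custom-route approach mentioned in the question) instead of rewrite rules, a per-domain route constraint avoids the if-then-else chain. A minimal sketch, with a hypothetical Placeholder controller and action:

// One constraint class instead of an if-then-else chain; register one catch-all route per domain.
// Requires System.Web.Routing (and System.Web.Mvc for MapRoute).
public class DomainConstraint : IRouteConstraint
{
    private readonly string _domain;

    public DomainConstraint(string domain)
    {
        _domain = domain;
    }

    public bool Match(HttpContextBase httpContext, Route route, string parameterName,
                      RouteValueDictionary values, RouteDirection routeDirection)
    {
        string host = httpContext.Request.Url.Host;
        // Matches example.com and any depth of subdomain (sub.example.com, sub.sub.example.com, ...).
        return host.Equals(_domain, StringComparison.OrdinalIgnoreCase)
            || host.EndsWith("." + _domain, StringComparison.OrdinalIgnoreCase);
    }
}

// In RouteConfig.RegisterRoutes, registered before the default route (which stays as the fallback view):
routes.MapRoute(
    name: "ExampleDomain",
    url: "{*path}",
    defaults: new { controller = "Placeholder", action = "Example" },
    constraints: new { host = new DomainConstraint("example.com") });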

MVC 5 how to achieve POST that behaves like a redirect to GET with content

My client redirects to a https://domain.com/Controller/GetInfo?Querystring method. Now my query string is getting dangerously close to the 2K limit, so I need to reproduce this behavior but pack my query string into the content of the messages. Since it would be heresy (etc.) to try a GET with content, I'll use a POST. However, I can't redirect to a POST since a Redirect has no content.
So, what I am looking for is the best MVC 5 pattern to resolve this: I need to provide lots of content, but I want the resulting page hosted on my remote server (i.e. as if I had redirected).
Also, since I use load-balanced servers in Azure, I'd prefer to keep my servers cleanly stateless if at all possible (else I'll have to introduce session caching).
@AntP is absolutely right in the comments above. If your query string is approaching 2K, then you're abusing it.
If there's a particular object you're referencing, then you can simply include the id or some other identifying piece of it and use that to look it up again from your data store.
If there's no persistent record of the object, then you can use something like Session or TempData to store it between one request and the next.
Regardless, it's not possible to redirect with a request body, which also means it's not possible to redirect using POST. The reason for this is that a redirect is not something the server does, but rather the client. The server merely suggests that the client go to a different URL. It's then up to the client (web browser) to issue a new request for that URL. Since the client is the one issuing the request, it makes the decision about what data is or isn't included in that request, not the server.
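To make the Session/TempData suggestion concrete, here is a minimal Post-Redirect-Get sketch; InfoRequest, the action names, and BuildViewModel are hypothetical stand-ins for your types. Note that TempData is backed by session state by default, so on load-balanced servers it needs a shared session store.

[HttpPost]
public ActionResult GetInfo(InfoRequest request)
{
    // Stash the large payload server-side so it never has to travel in the query string.
    TempData["InfoRequest"] = request;
    return RedirectToAction("ShowInfo");
}

[HttpGet]
public ActionResult ShowInfo()
{
    var request = TempData["InfoRequest"] as InfoRequest;
    if (request == null)
        return RedirectToAction("Index");   // nothing stored, e.g. the user refreshed the page
    return View(BuildViewModel(request));   // BuildViewModel is a hypothetical mapping helper
}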

Changing the interface of a web service without having access to it

I have a website, let's just call it search, open in one of my browser tabs. search has a form which, when submitted, runs queries on a database to which I don't have direct access. The problem with search is that the interface is rather horrible (one cannot save the aforementioned queries, etc.).
I've analyzed the request (with a proxy) which is sent to the server by search, and I am able to replicate it. The server even sends back the correct result, but the browser is not able to open it (same-origin policy). Do you have any ideas on how I could tackle this problem?
The answer to your question is: you can't. At least not without using a proxy as suggested in the answer by Walter, and that would mean your web site visitors would have to knowingly log in to your web site using their other web site's credentials (hmm, doesn't sound good...).
The reason you can't do this is related to security: if you could run a script against the tab next to the one with the site open (which is what I'm guessing you want to do), you would be able to mount a CSRF attack, grab any data you wish, and send it to hack.com.
This is, of course, assuming that there has to be a login somewhere in the process, otherwise there's no reason for you to not be able to create a simple form which posts the required query and gets the info.
If you did have access to the mentioned website, you could support cross-domain requests using JSONP.
It is not possible to bypass the same-origin policy from JavaScript (which, judging by your question, is what you want to do). You need to set up a server-side proxy that performs the request for you and returns the HTML.
A simple way of doing this in PHP would be like this:
<?php
// Forward the incoming query string to the search site and echo its response back to the browser.
echo file_get_contents("http://searchdomainname.com" . "?" . http_build_query($_GET, '', '&'));
?>
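Since this thread is ASP.NET MVC oriented, here is a rough sketch of the same server-side proxy idea as a controller action (the host name is the same placeholder used in the PHP snippet; requires System.Net.Http and System.Threading.Tasks):

public class ProxyController : Controller
{
    public async Task<ActionResult> Search()
    {
        using (var client = new HttpClient())
        {
            // Forward the incoming query string (Request.Url.Query includes the leading "?").
            string url = "http://searchdomainname.com" + Request.Url.Query;
            string html = await client.GetStringAsync(url);
            return Content(html, "text/html");
        }
    }
}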

SEO Destroyed By URL Forwarding - Can't figure out another way

We design and host websites for our clients/sales force. We have our own domain: http://www.firstheartland.com
Our agents fill out a series of forms on our website that are loaded into a database. The database then renders the website as a database driven website.
/repwebsites/repSite.cfm?link=&rep=rick.higgins
/repwebsites/repSite.cfm?link=&rep=troy.thompson
/repwebsites/repSite.cfm?link=&rep=david.kover
The database application reads which "rep" the site is for and the appropriate page to display from the query string. The page then outputs the content and the appropriate CSS to style the page and give it its own individual branding.
We have told the users to use Domain Name Forwarding to get to their spot on our server. However, everyone seems to be getting indexed under our domain instead of their own. We could in theory assign a new IP to them; the cost is not the issue.
The issue is how we would possibly accomplish this.
With all of that said, them being indexed under our domain would still be OK as long as they would actually show up high in the ranking for their search term.
For instance, an agent owns TroyLThompson.com. If I search for "Troy L Thompson", it does not show up in my search. Only "troy thompson first heartland" works (they show up third).
Apart from scrapping the whole system, I don't know what to do. I'm very open to ideas.
I'm sure you can get this to work as most hosting companies will host hundreds of websites on a single server (i.e. multiple domains on one IP).
I think you need your clients to update the nameservers for their domains (i.e. DNS) to return the IP address of your hosting server. Then you need to configure your server to return the right website based on the domain that was originally requested.
That requires your "database driven website" to look in the HTTP request and check which domain was originally requested, then it can handle the request accordingly.
- If you are using Apache, see how to configure Apache to host multiple domains on one IP address.
- If you are using Microsoft IIS, maybe Host-Header Routing is what you need.
You will likely need code changes on your "database driven website" to cope with these changes.
I'm not sure that having a dedicated IP address per domain will help much, as then you have to find a way to host all those IP addresses from a single web server. However, if your web server architecture already supports a shared database and multiple servers, then that approach might work well for you, especially if you expect the load from some domains to be so heavy that you need a dedicated web server for them.
Google does not include URLs that return a 301 status code in its index. The reason is pretty obvious on second thought: the redirect tells Google "Whatever was here before has moved there, please update your references". One solution I can see is setting up Apache virtual hosts on your server for each external domain, and having each rep configure their domain's DNS A record to point to the IP address of your server.

Account based lookup in ASP.NET

I'm looking at using ASP.NET for a new SaaS service, but for the love of me I can't seem to figure out how to do account lookups based on subdomains like most SaaS applications (e.g. 37Signals) do.
For example, if I offer yourname.mysite.com, then how would I use ASP.NET (MVC specifically) to extract the subdomain so I can load the right template (displaying your company's name and the like)? Can it be done with regular routing?
This seems to be a common thing in SaaS so there has to be an easy way to do it in ASP.NET; I know there are plugins that do it for other frameworks like Ruby on Rails.
This works for me:
// Returns the left-most label of the host name (e.g. "yourname" for yourname.mysite.com),
// or "www" when there is no subdomain or the host is an IP address.
public string GetSubDomain()
{
    string subDomain = "";

    if (Request.Url.HostNameType == UriHostNameType.Dns)
        subDomain = Regex.Replace(Request.Url.Host, "((.*)(\\..*){2})|(.*)", "$2");

    if (subDomain.Length == 0)
        subDomain = "www";

    return subDomain;
}
I'm assuming that you would like to handle multiple accounts within the same web application rather than building separate sites using the tools in IIS. In our work, we started out creating a new web site for each subdomain but found that this approach doesn't scale well - especially when you release an update and then have to modify dozens of sites! Thus, based on several years' worth of experience doing exactly what you propose, I recommend this approach rather than the server-oriented techniques suggested in the other answers.
The code above just makes sure that the host is a fully formed DNS name (rather than, say, an IP address) and returns the subdomain. It has worked well for us in a fairly high-volume environment.
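A hypothetical usage sketch (the repository and its FindBySubdomain method are made-up stand-ins for your own data access) showing the subdomain driving which account's branding gets loaded:

public ActionResult Index()
{
    string subdomain = GetSubDomain();                            // "yourname" for yourname.mysite.com
    var account = _accountRepository.FindBySubdomain(subdomain);  // hypothetical data access
    if (account == null)
        return HttpNotFound();
    return View("Index", account);                                // one shared view, branded per account
}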
You should be able to pick this up from the ServerVariables collection, but first you need to configure IIS and DNS to work correctly. For what it's worth, 37Signals probably use Apache or another open-source, Unix web server; on Apache this is referred to as virtual hosting.
To do this with IIS you would need to create a new DNS entry (create a CNAME yourname.mysite.com to application.mysite.com) for each domain that points to your application in IIS (application.mysite.com).
You then create a host header entry in the IIS application (application.mysite.com) that will accept the header yourname.mysite.com. Users will actually hit application.mysite.com, but the address they see is the custom subdomain. You then access the ServerVariables collection to get the value and decide how to customize the site.
Note: there are several alternative implementations you could follow depending on requirements:
- Handle the host header processing at a hardware load balancer (more likely 37Signals do this than rely on the web server), and create a custom HTTP header to pass to the web application.
- Create a new web application and host header for each individual application. This is probably an inefficient implementation for a large number of users, but could offer better isolation and security for some people.
You need to configure your DNS to support wildcard subdomains. It can be done by adding an A record pointing to your IP address, like this:
* A 1.2.3.4
Once it's done, whatever you type before your domain will be sent to your root domain, where you can get the subdomain by splitting the HTTP_HOST server variable, as the user buggs suggested:
string user = HttpContext.Request.ServerVariables["HTTP_HOST"].Split('.')[0];
// use the user variable to query the database for account-specific data
PS: If you are using shared hosting, you're probably going to have to buy a unique-IP add-on from them, since it's mandatory for wildcard domains to work. If you're using dedicated hosting, you already have your own IP.
The way I have done it is with HttpContext.Request.ServerVariables["HTTP_HOST"].Split('.').
Let me know if you need more help.
