Umbraco 7 in Server 2008 R2 NLB with IIS shared configuration

I've posted this in the Umbraco forums but haven't gotten much feedback there yet. We are in the process of setting up Umbraco 7 in the following environment:
Two Windows Server 2008 R2 Web Server edition systems in a Windows Network Load Balanced (NLB) cluster (two NICs each with unique private management IP addresses)
Nodes use an IIS 7.5 shared configuration
Content is maintained on a clustered Windows file server and accessed by the web servers via a UNC share
We are planning on hosting the Umbraco files/content on the network share (as opposed to the extra overhead of setting up a file replication system).
I have followed this document - http://our.umbraco.org/documentation/installation/load-balancing - as best I could, but it doesn't directly address the issue of using an IIS shared configuration. Given that, can anyone help with the following questions?
1) Since there’s no mention in the documentation of operating in an IIS shared configuration, I see some potential problems, especially regarding the section that talks about assigning unique host headers to each server in the cluster:
With an IIS shared configuration, host headers are shared among the participating hosts
Host headers don’t really mean anything in an HTTPS environment anyway; IIS 7.5 has no SNI support, so it can’t dispatch HTTPS requests by host header (and we are planning to operate the site as 100% HTTPS)
It’s not clear how much this matters because each cluster node does have a unique backend IP address which, as far as I can tell, is the important part
2) The documentation stresses designating a single server as the “back office server” for administration of content and states that this is even more important when using a shared content arrangement like we are planning to do. The issue seems to be with preventing file locks from interfering with each other.
Could this be handled in the NLB configuration by making sure the admin traffic site is only ever handled by a single host at any given time? (This would be similar to how we manage SMTP traffic in the cluster – when the first node is up it always handles SMTP traffic. If that node is down at any point, the second node handles SMTP traffic until the first node rejoins the cluster.)
Or is it crucial that one node always handles admin/content update traffic and content updating will simply not be available if that node is down at any given time?
How is the admin server identified? What prevents a secondary server from providing access to the administration pages?
Thanks for any and all feedback!
UPDATE: Dealing with #1 hasn't been too hard - I'm not using host headers but that doesn't seem to matter. I've bound each of the nodes' unique IP addresses to the site in IIS so they each respond as needed (and effectively ignore IP addresses that don't apply). Using a wildcard certificate has simplified this as well.
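For anyone doing the same: with a shared configuration the bindings live in the one applicationHost.config, so they only need to be added once, e.g. via appcmd. The site name and node IPs below are placeholders, and the wildcard certificate still has to be bound to each IP:443 separately (netsh http add sslcert):
C:\> %windir%\system32\inetsrv\appcmd set site /site.name:"UmbracoSite" /+bindings.[protocol='https',bindingInformation='192.168.10.11:443:']
C:\> %windir%\system32\inetsrv\appcmd set site /site.name:"UmbracoSite" /+bindings.[protocol='https',bindingInformation='192.168.10.12:443:']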
Handling #2 has been a little trickier. There's nothing in Umbraco that looks at the hostname being used, so each node will serve the admin pages if it receives requests for them. As mentioned in my comment below, I've been looking at the URL Rewrite module in IIS to ensure admin traffic only goes to one host, but what gets redirected is an additional question. See the thread I have at the Umbraco forums for more on that.
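For example, a rule along these lines would push all back-office requests to a single admin host over HTTPS. This is only a sketch of the approach; admin1.example.com is a placeholder for whatever name resolves to the designated node:
<rule name="Back Office To Admin Node" enabled="true" stopProcessing="true">
  <match url="^umbraco(/.*)?$" />
  <conditions>
    <add input="{HTTP_HOST}" pattern="^admin1\.example\.com$" ignoreCase="true" negate="true" />
  </conditions>
  <action type="Redirect" url="https://admin1.example.com/{R:0}" />
</rule>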

Unfortunately I don't know a huge amount about the load-balancing setup, as I worked with our host to get it configured, but I can confirm we had significant problems before forcing a single admin host. Initially editors could edit from either host, but changes were not appearing on the other host, and it also caused macro script errors on the other host because its Lucene index was missing information that was only available on the editing host.
To force users onto a single host for editing, I simply used a redirect rule in the URL Rewrite module for people accessing the Umbraco back end.
<rule name="Editor Server Node Redirect" enabled="true" stopProcessing="true">
  <match url="(.*)" />
  <conditions>
    <add input="{HTTP_HOST}" matchType="Pattern" pattern="^admin1.example.com$" ignoreCase="true" negate="true" />
    <add input="{PATH_INFO}" pattern="^/umbraco/login.aspx$" />
  </conditions>
  <action type="Redirect" url="http://admin1.example.com/{R:1}" />
</rule>
Simon

Related

Can I restrict who accesses my Azure website to people in the US?

I want to limit the people who can see and visit my website to only those in the US! I know I can restrict IP addresses to only those I give in my web.config file. Is there some subnet mask or something I can use below to define all US IP addresses, or is there some Azure setting I can configure in the Azure portal?
ex.
<system.webServer>
  <security>
    <ipSecurity allowUnlisted="false" denyAction="NotFound">
      <add allowed="true" ipAddress="24.130.112.11" subnetMask="255.255.0.0" />
      <add allowed="true" ipAddress="73.92.189.234" subnetMask="255.255.0.0" />
    </ipSecurity>
  </security>
</system.webServer>
There is no setting you can configure in the Azure portal that restricts access based on the geographic region an IP address comes from.
I suggest using a NuGet package such as MaxMind.GeoIP2, which provides GeoIP lookups that resolve a client IP address to location information such as country and city.
This is better than having to find and whitelist every IP address range and subnet mask for the U.S.
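A minimal sketch of that lookup, assuming you have downloaded MaxMind's free GeoLite2 Country database file (the path below is a placeholder; in a real app you would create the reader once at startup and reuse it):
using System;
using MaxMind.GeoIP2;
using MaxMind.GeoIP2.Exceptions;

class GeoBlockExample
{
    // Returns true when the client IP resolves to the United States.
    static bool IsUsAddress(string clientIp)
    {
        // Placeholder path to the downloaded GeoLite2 database.
        using (var reader = new DatabaseReader(@"C:\data\GeoLite2-Country.mmdb"))
        {
            try
            {
                var response = reader.Country(clientIp);
                return response.Country.IsoCode == "US";
            }
            catch (AddressNotFoundException)
            {
                // Unknown or private addresses: treat as non-US.
                return false;
            }
        }
    }

    static void Main()
    {
        Console.WriteLine(IsUsAddress("24.130.112.11"));
    }
}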
As far as I know, that is the only way to restrict IP addresses in Azure. You can also use the IP Restrictions menu in Azure App Services to add or remove restrictions.
However, I would argue that limiting who can access your site by region (IP address) is not the correct solution. It seems like an unnecessary complication that can run into both technical and logical issues. Would you want someone who is based in San Francisco but travels outside the US to be able to access your site only when they are back in the office?
I think there are a handful of more appropriate and easier-to-manage solutions, such as:
Prompt the user (if they have never visited your site before) with a message saying the site is in test/alpha/beta and is only intended for X users (English-speaking, metropolitan workers, etc.)
Ask for information at login that matches your criteria (language, region, location, etc.) and either redirect users to the correct location or have them gracefully leave the site
Distribute beta keys to anyone you want to access your site. You can be very specific about who you invite, especially if you have a certain demographic in mind, which may prove more valuable than simply having a very large pool of users.

How to publish and host a MVC4 application in your domain?

I have a web domain www.MyDomain.com and an MVC4 web application MyMVCWebApp.
First I publish the application to a local destination.
For instance: C:\TempLocation
And then I upload it to my domain with an FTP tool (FileZilla??)
The files get uploaded, but I can't find the webpage.
Which URL do I have to use?
http://www.MyDomain.com/MyMVCWebApp/Home/Index.cshtml or something?!
Do I have to change the settings in my web.config?
What do I have to do?
You can't host an application on a domain.
An application is hosted on a web server. A domain name is only a way to translate an easy-to-remember address like "www.google.com" to the web server's IP address, which looks like 173.194.66.104.
It is possible to purchase a domain without a web server.
So before going further:
Check if you actually bought a domain only, or a domain with a server
Your domain should resolve to your server's IP address; you can check whether it is configured correctly by opening a command prompt and running
C:\> ping www.yourdomain.com
If this is not the case, you will need to update the A record of your domain and wait for the change to be replicated to DNS servers worldwide.
If you have a managed server, you should check your hosting provider's website. They usually provide in-depth documentation, and they all have a different way of doing things. Most of the time you will indeed be able to upload your files using FTP software such as FileZilla.
However, in order to host an MVC 4 application you need a server with the IIS web server, which means that you need a Windows server. So if you have a Linux server, you should contact your hosting provider's support and tell them you made a mistake during your order. (It is possible to host an MVC 4 application on Linux, but I don't think it is often offered on managed servers.)
If you have a dedicated server you are on your own.
The URL you will have to use to access your application depends on what you have configured in the RegisterRoutes method of the RouteConfig.cs file.
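For reference, the default RegisterRoutes generated by the MVC 4 project template looks like this; with it, plain http://www.MyDomain.com/ maps to HomeController.Index, and view files such as Index.cshtml never appear in the URL:
using System.Web.Mvc;
using System.Web.Routing;

public class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        // {controller}/{action}/{id} with Home/Index as the defaults,
        // so the bare site root serves HomeController.Index.
        routes.MapRoute(
            name: "Default",
            url: "{controller}/{action}/{id}",
            defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
        );
    }
}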
I recommend watching the last video on this page for a better overview of the possibilities.

iis bindings on shared server

I have a scenario where I have many domains (could be hundreds) pointing to my one web application, for example:
site1.com
site2.com
site3.com
.... etc
All point to my single web app, which will be in a shared hosting environment.
The only way I can think of to configure these bindings in IIS is to send my shared hosting company an email every time I need a new binding. Is there a better way? For example, somehow sending all host headers to my site? How do I do that?
You probably need your own dedicated IP address to do this. Then you can simply leave the Host name empty in your binding, specifying only your IP.
This means that as long as DNS points all of the sites at your IP, your site should respond.
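For illustration, here is roughly what such a catch-all binding looks like in applicationHost.config (site name and IP address are placeholders):
<site name="MySharedApp" id="1">
  <bindings>
    <!-- Empty host header after the second colon: respond to any
         hostname whose DNS record points at this IP -->
    <binding protocol="http" bindingInformation="203.0.113.10:80:" />
  </bindings>
</site>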

ASP.NET MVC Background Service processing

We run a number of ASP.NET MVC sites - one site per client instance with about 50 on a server. Each site has its own configuration/database/etc.
Each site might be running on a slightly different version of our application depending on where they are in our maintenance schedule.
At the moment, for background processing we are using Quartz.NET, which runs in the app domain of the website. This mostly works well but obviously suffers from issues like not running when the app domain shuts down, such as after prolonged inactivity.
What are our options for creating a more robust solution?
Windows services are being discussed, but I don't know how we can achieve the same multiple-sites-on-different-versions arrangement that we get within IIS.
I really want each IIS site to have its own background processing which always runs like a service but is isolated in the same way an IIS site is.
You could install a Windows service for each client. That seems like a maintenance headache though.
You could also write a single service that understands all versions of your app, then processes each client one after the other.
You could also create SQL Server jobs to do this, and set up one job for each customer.
I assume that you have a separate database for each client, as you mentioned.
Create a custom config file with a DB connection string for each client.
e.g.
<?xml version="1.0" encoding="utf-8" ?>
<MyClients>
  <MyClient name="Client1">
    <Connection key="Client1" value="data source=YourServer;initial catalog=client1DB;persist security info=True;user id=xxxx;password=xxxx;MultipleActiveResultSets=True;App=EntityFramework" />
  </MyClient>
  <MyClient name="Client2" .....></MyClient>
</MyClients>
Write a Windows service which loads the above config file (client/connection-string pairs) into memory and iterates through each client's database config. I am guessing all clients have a similar job-queuing infrastructure, so you can execute the same logic against each database.
You can use a Mutex when the service starts processing a client to make sure two instances of the service never run against the same client at the same time.
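A minimal sketch of that guard (the mutex naming scheme and client name are placeholders, not part of any particular library):
using System;
using System.Threading;

class ClientJobRunner
{
    // One named mutex per client so that only a single service instance
    // processes a given client's database at any one time.
    static void RunJobsForClient(string clientName)
    {
        bool createdNew;
        using (var mutex = new Mutex(true, @"Global\MyJobService_" + clientName, out createdNew))
        {
            if (!createdNew)
            {
                // Another instance already owns this client; skip it.
                return;
            }
            try
            {
                // ...execute the queued jobs against this client's database...
            }
            finally
            {
                mutex.ReleaseMutex();
            }
        }
    }

    static void Main()
    {
        RunJobsForClient("Client1");
    }
}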

SharePoint Search with Network Load Balancing (NLB)

SharePoint MOSS 2007 on a 64-bit OS and SQL. We added a new web front end (WFE) to our farm and all sites seem to work fine, but now we've noticed that the search service has completely stopped working. It works if I change my hosts file to point to the original WFE, but if I use the NLB IP or the IP of the new WFE, it says "Unable to Connect to the Search Service."
Help.
As someone who was until recently a SharePoint developer, one of the biggest issues I have come across is load-balanced environments. Do your alternate access mappings contain the proper references?
The way we do it is to keep one WFE outside the NLB and have the indexer use that machine. This is better for performance: the separate WFE serves only the indexer while regular traffic goes through the NLB, so indexing doesn't interfere with regular users visiting the site.
The other pro is that you circumvent issues like this one.
P.S. This question DOES belong on Server Fault though; voted to move.
I just set up a new medium-farm MOSS 2007 x64 environment and ran into a few snags. This is what we ended up doing:
2 WFEs, 1 index server, 1 SQL server - all running Windows Server 2008
The 2 WFEs have an NLB cluster configured and host queries (but not indexing).
The index server is also a WFE, is NOT part of the cluster, and hosts indexing (but not queries).
The index server had to have the loopback check disabled and a hosts file entry set up to point the portal DNS name to 127.0.0.1. Without those settings it was generating errors. With them, it can index itself without affecting portal performance, while still being able to replicate its index to the query servers.
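For reference, the loopback check is disabled with a well-known registry value (typically followed by a reboot, or at least an IISReset), and the hosts entry is a one-liner; the portal hostname below is a placeholder:
C:\> reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v DisableLoopbackCheck /t REG_DWORD /d 1
and in %SystemRoot%\System32\drivers\etc\hosts:
127.0.0.1    portal.example.com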
Hope that helps.
