Adding a host name binding to a website in IIS breaks the site

I'm baffled by an issue with my site in IIS 7. It works fine when it's the only site running and no binding is used: my domain name www.mysite.com is registered in DNS to point to my IP, and it pulls up my site great. However, now I want to add a second site in IIS, so I have to start using host name bindings, but as soon as I put www.mysite.com in as the host name, suddenly I can't pull up the site anymore! And, in case this is helpful: if I start up the Default Web Site (which has no bindings) alongside my current website with the binding, the domain name pulls up the default web site (the IIS7 welcome image) instead! It's as if IIS completely ignores the binding I've added, other than the fact that it allows me to run both sites at the same time. Bindings don't take time to take effect in IIS, right? Waiting until tomorrow won't make a difference, I imagine?
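(For reference, the binding I'm adding is the equivalent of this appcmd command, with "MySite" standing in for the actual site name:)

appcmd set site /site.name:"MySite" /+bindings.[protocol='http',bindingInformation='*:80:www.mysite.com']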
I've done this many times on other servers, and as far as I can remember I'm doing everything the same as before. I can't figure out why it's not working this time. Am I forgetting something crucial here?
The only difference I can tell is that this domain is hosted with Google, which requires me to update the DNS myself. All those other times, I've called up GoDaddy and said, 'Hey, can you point this domain to this IP address?' and they did it for me. Is it possible that I've updated the DNS incorrectly, or incompletely? I don't know if that can be the case when the domain has been pulling up the website just fine all this time, right up until I tried to add the second website on the server... Any ideas?
BTW, this is a virtual server (2008 R2) hosted at atlantic.net - I've never used them before. All the other virtual servers I've used have been with 1and1. Don't know if that makes a difference too.

Ok, I figured it out - turns out the DNS WAS set incorrectly after all! I was able to narrow down the issue by creating a bunch of dummy sites and playing with my local computer's hosts file: everything routed just fine, so clearly IIS was working as expected. I then called GoDaddy repeatedly until I found someone who agreed to help me with a Google-hosted domain. Apparently, from what I could understand, I had set the IP address in the wrong place, which made the domain forward (redirect) to the bare IP address - so the request carried no host name for IIS to match against, which is why it pulled up the default website (no bindings) with no problems, and ONLY the default website! He directed me to another screen (the DNS Zone File tab) where I could set it properly. Now everything goes to the expected sites!
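In zone-file terms, the fix boils down to a plain A record instead of a forwarding rule (203.0.113.10 here is just a stand-in for my server's real IP). With an A record the browser sends www.mysite.com in the Host header, which is exactly what the IIS binding matches:

www.mysite.com.  IN  A  203.0.113.10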
Hopefully this will help another newbie out one day when they have to set their own DNS (gasp!)

Related

I want to access Jira (Docker on Synology DS716+II) from the LAN not only via IP_OF_SYNOLOGY:PORT but also via, for example, jira.synology.local

I am working with a Synology NAS, type DS716+II, DSM 6.1.4-15217 Update 2, on which Docker runs with a Jira container.
What I've been assigned to get working is to access Jira's web interface via, let's say, jira.synology.local, with synology being the server name.
I have read a lot about nginx and how it has been built in since DSM 6.x, but I can't seem to get it to work properly at all.
I can access Jira's web interface from another machine within the LAN via IP_OF_SYNOLOGY:PORT, so when setting up a reverse proxy on the server it should point to LOCALHOST:PORT, right? I have also tried using the actual IP instead of LOCALHOST, but without success.
I can access the interface of the Synology itself not only via IP_OF_SYNOLOGY:PORT but also via DOMAINNAME.LOCAL if I set the domain name.
I really don't know what I'm missing, and I've tried everything I can think of. Does anyone have experience with this?
If any information is missing, I'll gladly provide it. I'm fairly new to Synology, I have to admit. Thanks in advance!
This has gotten zero responses, but I figure someone will probably have a similar "problem" in the future, so I will answer anyway.
I solved everything when I set up Active Directory. When you install AD, the DNS Server package is automatically installed too.
So we have JIRA running in a Docker container (on port, let's say, 12345) and I want to access it via the LAN at jira.domainname.
To do so you need DSM 6.x or higher (for nginx) and the DNS Server package. That's it.
In the DNS Server you will have to create a new master zone. You can freely choose the domain name, but the Master DNS server must be the IP of your Synology station, since it functions as the DNS server. Then edit the zone's Resource Records: add an A record pointing your chosen domain name to the Synology's IP, and a CNAME record pointing jira.domainname to that domain name.
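Written out as plain zone records, the two entries amount to something like this (using the domainname / 192.168.0.200 placeholders from the rest of this walkthrough):

domainname.        A      192.168.0.200
jira.domainname.   CNAME  domainname.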
Now the last step for setting up the DNS server is to tell it what to do when there is no specific record for a query. For example, if you open jira.domainname in your browser, there is a specific record for that, and the DNS server knows how to answer it. But if you open, say, google.com, the DNS server has no information on that and does not know what to do. So we tell the DNS server to forward any request it has no records for. To do so, enable the forwarders and put in the IP of your gateway/managed switch as primary and some public DNS server (8.8.8.8, one of Google's DNS servers) as secondary.
Please remember that jira.domainname always stands for the domain name you chose, and 192.168.0.200 for the IP of your Synology station.
So now the DNS server is completely set up. Next we want to take advantage of the built-in reverse proxy (which runs on nginx in the background). Navigate to the reverse proxy settings (in DSM 6.x, Control Panel > Application Portal > Reverse Proxy) and create a new reverse proxy rule: the source is jira.domainname on port 80, and the destination is localhost on JIRA's port (12345 in this example). This way several URLs can point to the same destination (your Synology, 192.168.0.200) but on different ports, which comes in very handy for applications running in Docker.
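Under the hood, the rule amounts to an nginx server block along these lines (an illustrative sketch, not the exact config DSM generates):

server {
    listen 80;
    server_name jira.domainname;           # host name the DNS records point here

    location / {
        proxy_pass http://localhost:12345; # JIRA's Docker port
        proxy_set_header Host $host;
    }
}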
If you are running this in a home setup or small office, you are probably working with a standard consumer router, such as a FritzBox by AVM. Those are pretty good, but beware that some prohibit so-called DNS rebinding, which means DNS replies pointing to a local IP will not be allowed. Since in this setup the DNS server (your Synology) and the destination JIRA (also your Synology) are in the same LAN, we have to create an exception. Other routers may not suppress those requests, but if they do, an exception is necessary.
The next step is to tell your gateway or managed switch to use the newly set up DNS server as its primary DNS server. On a FritzBox you can do so in its DNS settings: put in the IP of your DNS server and a secondary DNS server. The secondary is important as a fallback in case your DNS server stops working at some point.
Now that everything is set up, I would recommend restarting the router/managed switch, the Synology, and the workstation you are working on, to flush all caches. After that you can simply open your browser, type in jira.domainname, and JIRA should open up. You can also open a terminal/cmd and run nslookup jira.domainname to see if it is being resolved correctly.
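A successful lookup looks roughly like this (the CNAME resolves via the A record to the Synology's IP):

nslookup jira.domainname
Name:    domainname
Address: 192.168.0.200
Aliases: jira.domainname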
I really hope this will help someone at some point; if there are any additional questions, please feel free to comment or write me directly!

using \\servername\sharename in IE9 not being picked up as intranet site

I have a very basic intranet site for our company, and its main purpose is to link to SMB shares on our network, so people can open files and edit them without the need to then re-upload them to the site.
What I have is a basic <a href="\\IP ADDRESS\SHARENAME\"></a>
The issue is that, regardless of whether I use the IP address or the actual DNS name of the machine, IE9 always seems to think the intranet is an internet site, and stops these links from working.
Let's say for example, the web server address is 10.1.3.81, and I have a share on that same server for a global phone directory spreadsheet. I want someone to be able to click on the link on the page, and have it open that file directly.
So for the href, I put in \\10.1.3.81\intranet\phone directory\list.xls
Or something like that. IE9 (which is what all our users are using) considers this link to point to file://10.1.3.81/intranet/phone directory/list.xls
That's great, but as it doesn't consider this to be on the intranet, it blocks the file:// protocol, and the link does nothing.
If I add the site to my trusted sites list, it then works correctly. So I am wondering if there is a way, on the programming side of things, to create these kinds of links and have them automatically picked up as intranet links?
Failing that, I will post on Server Fault and see if someone can guide me on applying a policy that adds this site to the trusted sites for all users and computers.
Many thanks
Eds
As it turns out, I was accessing the intranet using either the FQDN or the IP address of the server.
As this article shows (http://support.microsoft.com/kb/303650), if I just use the server name instead and drop the domain name from the end, the links behave as I would like.
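In other words, IE's zone detection keys off the shape of the host name (per the KB article, dotted names and IP addresses are assumed to be internet sites):

\\servername\intranet\...              -> Local intranet zone (single-label name)
\\servername.domain.com\intranet\...   -> Internet zone (dotted FQDN)
\\10.1.3.81\intranet\...               -> Internet zone (IP address)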
Sorry for this useless question.
Thanks, Eds

Subdomains and locally installed Rails app

I can't figure out what I'm overlooking; perhaps it's something obvious, or a lack of understanding.
The app I'm working with uses subdomains, which work properly on the hosting server. I figured a local install would kick up some issues around routing, so I read up on making changes to /etc/hosts and using the Ghost gem. Both seem to work fine, i.e. localhost:3000/ becomes myapp.local:3000, but I don't understand how to go about logging into a subdomain account. Here's an example...
myapp.local:3000/session/new = the default login page for the app
myapp.local:3000/signup = default signup page
I can create an account here, e.g. Sub1
The thank-you page is shown with a reference to sub1.myapp.com, which points to the hosted app (the local db shows this domain as well)
sub1.myapp.local manually added to /etc/hosts and dscacheutil -flushcache
sub1.myapp.local:3000/session/new is the subdomain login page
Login attempts return that this isn't a valid domain. This seems to make sense, because the local db shows the url as sub1.myapp.com, same as on the hosting server.
So my question is whether there's a local workaround that I can use for development or have I totally missed a fundamental concept along the way?
You might just want to try putting the actual dot-com domain in your /etc/hosts file, i.e.:
127.0.0.1 sub1.myapp.com
127.0.0.1 myapp.com
127.0.0.1 anyothersubdomains.myapp.com
What this usually does is trick your computer into thinking it is the host for all of those, so you can't reach the real site in a web browser anymore.
If you do want it to be .local, presumably so that you can refer to the real online site while working on a local copy, you should probably take a look in app/controllers/application_controller.rb (sometimes application.rb) and look for logic in there that determines what to do depending on the subdomain. Maybe it's hard-coded to only look for a .com or something.
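The kind of logic to look for might resemble this hypothetical sketch (the Account model and finder are made up for illustration):

class ApplicationController < ActionController::Base
  # request.subdomains returns e.g. ["sub1"] for sub1.myapp.com
  def current_account
    # If something here hard-codes the domain, e.g. "#{sub}.myapp.com",
    # a .local copy will never match.
    Account.find_by_subdomain(request.subdomains.first)
  end
end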
If you are using the WEBrick server or something like Puma for development, you can use lvh.me to access your subdomains, e.g.
http://sub.lvh.me:3000/
http://lvh.me:3000/ is equivalent to http://localhost:3000/
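This works because lvh.me is a public domain whose records, including all subdomains, resolve to 127.0.0.1, so no /etc/hosts edits are needed:

dig +short lvh.me
127.0.0.1
dig +short sub.lvh.me
127.0.0.1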

SEO Destroyed By URL Forwarding - Can't figure out another way

We design and host websites for our clients/sales force. We have our own domain: http://www.firstheartland.com
Our agents fill out a series of forms on our website that are loaded into a database, and each agent's site is then rendered from that database as a database-driven website.
/repwebsites/repSite.cfm?link=&rep=rick.higgins
/repwebsites/repSite.cfm?link=&rep=troy.thompson
/repwebsites/repSite.cfm?link=&rep=david.kover
The database application reads from the query string which "rep" the site is for and which page to display. The page then outputs the content and the appropriate CSS to style the page and give it its own individual branding.
We have told the agents to use domain name forwarding to get users to their spot on our server. However, everyone seems to be getting indexed under our domain instead of their own. We could in theory assign a new IP to each of them; the cost is not the issue.
The issue is how we would possibly accomplish this.
With all of that said, being indexed under our domain would still be OK, as long as the agents actually showed up high in the rankings for their search terms.
For instance, an agent owns TroyLThompson.com. If I search "Troy L Thompson", it does not show up in my search; only "troy thompson first heartland" works (they show up third).
Apart from scrapping the whole system, I don't know what to do. I'm very open to ideas.
I'm sure you can get this to work as most hosting companies will host hundreds of websites on a single server (i.e. multiple domains on one IP).
I think you need your clients to update the nameservers for their domains (i.e. DNS) to return the IP address of your hosting server. Then you need to configure your server to return the right website based on the domain that was originally requested.
That requires your "database driven website" to look in the HTTP request and check which domain was originally requested, then it can handle the request accordingly.
- If you are using Apache, see how to configure Apache to host multiple domains on one IP address (a minimal sketch follows this list).
- If you are using Microsoft IIS, maybe Host-Header Routing is what you need.
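For the Apache option, a name-based virtual host per agent domain might look roughly like this (Apache 2.4 syntax; the paths are illustrative, and every vhost can share the same application root):

<VirtualHost *:80>
    ServerName troylthompson.com
    ServerAlias www.troylthompson.com
    # One shared application root; the app tells agents apart
    # by the requested host name.
    DocumentRoot /var/www/repwebsites
</VirtualHost>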
You will likely need code changes on your "database driven website" to cope with these changes.
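Since the site is ColdFusion (repSite.cfm), the code change could be as small as deriving the rep from the Host header instead of the query string. A hypothetical sketch (the datasource, table, and column names are made up):

<!--- Map the originally requested domain to a rep --->
<cfset requestedHost = CGI.HTTP_HOST>
<cfquery name="getRep" datasource="repsites">
    SELECT rep
    FROM rep_domains
    WHERE domain = <cfqueryparam value="#requestedHost#" cfsqltype="cf_sql_varchar">
</cfquery>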
I'm not sure that having a dedicated IP address per domain will help much, as then you have to find a way to host all those IP addresses from a single web server. However, if your web server architecture already supports a shared database and multiple servers, then that approach might work well for you, especially if you expect the load from some domains to be so heavy that you need a dedicated web server for them.
Google does not include URLs that return a 301 status code in its index. The reason is pretty obvious on second thought: the redirect tells Google "whatever was here before has moved there, please update your references". One solution I can see is setting up Apache virtual hosts on your server for each external domain, and having each rep configure their domain's DNS A record to point to the IP address of your server.
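On the rep side, that DNS change is a single A record in each agent's zone (203.0.113.10 standing in for your server's real IP):

troylthompson.com.      IN  A  203.0.113.10
www.troylthompson.com.  IN  A  203.0.113.10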

Rails/Passenger/Apache: Simple one-off URL redirect to catch stale DNS after server move

One of my Rails apps (using Passenger and Apache) is changing server hosts. I've got the app running on both servers (the new one in testing) and the DNS TTL set to 5 minutes. I've been told by a colleague (and have experienced something like this myself) that DNS resolvers sometimes ignore the TTL and may have the old IP cached for some time after I update DNS to the new server.
So, after I've thrown the switch on DNS, what I'd like to do is hack the old server to issue a forced redirect to the IP address of the new server for all visitors. Obviously I can do a number of redirects (301, 302) in either Apache or the app itself. I'd like to avoid the app method, since I don't want to do a check-in and deploy of code just for this one instance, so I was thinking a basic HTTP URL redirect would work. Buuttt, there are SEO implications should Google visit the old site, etc.
How best to achieve the re-direct whilst maintaining search engine niceness?
I guess the question is - where would you redirect to? If you are redirecting to the domain name, the browser (or bot) would just get the same old IP address and end up in a redirect loop.
If you redirect to an IP address.. well, that's not going to look very user friendly in someone's browser.
Personally, I wouldn't do anything. There may be some short period where bots get errors trying to access your site, but it should all work itself out in a couple days without any "SEO damage"
One solution might be to use mod_proxy, instead of a rewrite, to proxy traffic to the new host. This way you shouldn't see any "SEO damage".
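A sketch of what that could look like on the old server, assuming mod_proxy and mod_proxy_http are enabled (the domain and 203.0.113.20 are placeholders for the real site and the new server's IP):

<VirtualHost *:80>
    ServerName www.example.com
    # Keep the original Host header so vhost matching on the
    # new server still works.
    ProxyPreserveHost On
    ProxyPass        / http://203.0.113.20/
    ProxyPassReverse / http://203.0.113.20/
</VirtualHost>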
I used rinetd to forward traffic from the old server to the new one at the IP level. No web server or virtual host config needed. It runs very smoothly and is absolutely transparent to any client.
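For reference, rinetd is configured with one line per forwarding rule (bind address, bind port, connect address, connect port); something like this in /etc/rinetd.conf on the old server forwards ports 80 and 443 (203.0.113.20 again standing in for the new server's IP):

0.0.0.0 80  203.0.113.20 80
0.0.0.0 443 203.0.113.20 443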
