I'm using a domain name with this general structure: http://mydomainname.com/
However, when I click it, I get a 404 message.
And when I look at the URL bar, it's not http://mydomainname.com/ but, surprisingly, http://mydomainname.com/YkPWZ/.
How did YkPWZ/ appear automatically, and what can I do to eliminate this issue? Sometimes accessing http://mydomainname.com/ works fine, but most of the time the browser automatically tacks some random characters onto the end of the URL, throwing the 404 message. This is not browser-specific, and a few colleagues have replicated the issue on different operating systems (both desktop and iOS).
P.S. If it matters at all, I generated my website using GitHub Pages (Markdown files, not HTML).
I'm quite certain this is an issue on the GoDaddy side of things, though I'm unable to find any official documentation on the subject. As noted in comments above, the redirect isn't coming from GitHub Pages.
I found an old thread discussing the issue. Here is a brief summary:
- GoDaddy may use redirects like this to handle load balancing on their shared hosting servers.
- In several cases, users contacted GoDaddy about the problem and had the issue resolved, but were never told the technical specifics of what was happening.
If you wish to stay with GoDaddy, I recommend contacting them and pointing them to the thread I found above. They may be able to resolve the issue for you, though I wouldn't expect an explanation.
Alternatively, you can switch to another web host; GoDaddy isn't rated very highly in many circles, and luckily there are plenty of hosts to choose from. You could also point a custom domain directly at GitHub Pages and bypass a third-party host entirely.
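If you go the GitHub Pages route, the setup amounts to a CNAME file in the repository plus a DNS record at your registrar. A rough sketch, where the domain and GitHub username are placeholders for your own:

    # CNAME file at the root of the Pages repository
    www.mydomainname.com

    # DNS record at the registrar
    www.mydomainname.com.  CNAME  username.github.io.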
This is either a problem that Google is inflicting upon me, or a problem I am inflicting upon myself. I'm not totally sure.
When I first created my website a couple years ago, it followed a path similar to: http://www.mywebsite.abc123.com
Now, after a change in hosting services, I changed my domain to simply: https://www.mywebsite.com
I also added an SSL certificate at the time, for what it's worth, and it's been almost six months. I have all the variations (past and present) of my website registered and verified with Google's Search Console, but I can see no reason why the http://www.mywebsite.abc123.com link is getting indexed over the https://www.mywebsite.com link. I had actually assumed that http://www.mywebsite.abc123.com wouldn't even work anymore.
I've read about 301 redirects and it looks like something like that would solve my problem, but upon trying to implement it, I was confronted with nothing but a "Too many redirects" error.
Long story short, Google won't index my newer better URL.
But Yahoo and Bing will.
301 redirects have to be set up on the old domain so that it points to the new one. If you still have access to that domain, you can add the redirects via .htaccess or in its admin panel.
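If the old domain is served by Apache, a minimal .htaccess sketch might look like the following (the hostnames below are placeholders for your old and new addresses):

    RewriteEngine On
    # only redirect requests that arrive on the old hostname
    RewriteCond %{HTTP_HOST} ^www\.mywebsite\.abc123\.com$ [NC]
    RewriteRule ^(.*)$ https://www.mywebsite.com/$1 [R=301,L]

A "Too many redirects" error usually means the rule also fires for requests that already land on the new URL (for example, when both names point at the same document root), so restricting the condition to the old hostname as above avoids the loop.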
I have had an issue with setting up my gerrit server. The machine has Ubuntu 12.04 LTS Server 64-bit installed on it. I am setting up git and gerrit as a way to manage source code and code review.
I require internal and external access to it, so I set up a DNS name that works externally. However, during the initial setup I left canonicalWebUrl at its default value, which is taken from the machine's hostname (in this case vmserver).
The issue I was running into is exactly as explained here: https://stackoverflow.com/questions/14702198/the-requested-url-openid-was-not-found-on-this-server. After trying to sign in or register an account with OpenID, it said the URL was not found.
For some reason, it was changing the URL in the address bar from the DNS name I set up to the canonicalWebUrl.
I tried to change the canonical web URL in the gerrit.config file found in the etc directory of the Gerrit site. After restarting the server, the Git project files were still present as they should be, but the administrator account no longer seemed to be registered and none of the projects were visible through Gerrit.
Is there a special procedure for changing the canonical web URL in Gerrit without disrupting access to the server?
Any help or information on canonical URLs would be much appreciated, as I cannot find much information about them.
Edit: Looking deeper, I found some information regarding "submodules" that is way over my head, and I do not understand whether it is what I am looking for or not:
https://gerrit-review.googlesource.com/#/c/36190/
The canonical web url must be set, and it sounds like you have done that correctly.
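For reference, the setting lives in the [gerrit] section of etc/gerrit.config; a minimal sketch, assuming gerrit.example.com stands in for your externally reachable DNS name:

    [gerrit]
        canonicalWebUrl = http://gerrit.example.com/

Gerrit needs to be restarted for the change to take effect.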
I suspect the issue you are seeing is caused by changing the canonical web url - some OpenID providers (Google being the big one) will return a different user ID based on the URL of the request. This is a privacy thing and cannot be changed. So previous users will now show up as new users and won't be in their old groups (Administrators group in this case).
If you don't have many users, it might be easiest to migrate them by hand. You can modify the database to map the new user ID to the old user account.
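As a rough sketch only: in Gerrit 2.x the OpenID identities live in the account_external_ids table of the ReviewDb database, so re-pointing a newly created identity at the old account looks roughly like the following (the account ID and external ID values are placeholders, and you should verify the table and column names against your own installation first, and back up the database before editing anything by hand):

    -- list the identities created after the URL change (placeholder pattern)
    SELECT account_id, external_id FROM account_external_ids
     WHERE external_id LIKE 'https://www.google.com/accounts/%';

    -- re-point the new external id at the original account (placeholder values)
    UPDATE account_external_ids
       SET account_id = 1000001
     WHERE external_id = 'https://www.google.com/accounts/o8/id?id=NEW_ID_HERE';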
I have a very basic intranet site for our company, and its main purpose is to link to SMB shares on our network, so people can open files and edit them without needing to re-upload them to the site.
What I have is a basic <a href="\\IP ADDRESS\SHARENAME\"></a> link.
The issue is that, regardless of whether I use the IP address or the actual DNS name of the machine, IE9 always seems to think the intranet is an internet site and stops these links from working.
Let's say for example, the web server address is 10.1.3.81, and I have a share on that same server for a global phone directory spreadsheet. I want someone to be able to click on the link on the page, and have it open that file directly.
So for the href, I put in \\10.1.3.81\intranet\phone directory\list.xls
Or something like that. IE9 (which is what all our users are using) considers this link to point to file://10.1.3.81/intranet/phone directory/list.xls
That's great, but as it doesn't consider this to be on the intranet, it blocks the file:// protocol, and the link does nothing.
If I add the site to my trusted sites list, it then works correctly. So I am wondering if there is a way, on the programming side of things, to create these kinds of links and have them automatically picked up as intranet links.
Failing that, I will post on Server Fault and see if someone can guide me on applying a policy to add this site to trusted sites for all users and computers.
Many thanks
Eds
As it turns out, I was accessing the intranet by using either the FQDN or the IP address of the server.
As this article shows (http://support.microsoft.com/kb/303650), if I just use the server name instead and drop the domain name from the end, the links behave as I would like.
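In other words, a link written against the bare machine name lands in the Intranet zone and opens directly (SERVERNAME below is just a placeholder for the host's short, dot-free name):

    <a href="\\SERVERNAME\intranet\phone directory\list.xls">Phone directory</a>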
Sorry for this useless question.
Thanks, Eds
I have a little problem with Googlebot. I have a server running Windows Server 2009; the system is called Workcube and it runs on ColdFusion. There is a built-in error reporter, so I receive every error message, especially those concerning Googlebot trying to reach a false link that doesn't exist. The links look like this:
http://www.bilgiteknolojileri.net/index.cfm?fuseaction=objects2.view_product_list&product_catid=282&HIERARCHY=215.005&brand_id=hoyrrolmwdgldah
http://www.bilgiteknolojileri.net/index.cfm?fuseaction=objects2.view_product_list&product_catid=145&HIERARCHY=200.003&brand_id=hoyrrolmwdgldah
http://www.bilgiteknolojileri.net/index.cfm?fuseaction=objects2.view_product_list&product_catid=123&HIERARCHY=110.006&brand_id=xxblpflyevlitojg
http://www.bilgiteknolojileri.net/index.cfm?fuseaction=objects2.view_product_list&product_catid=1&HIERARCHY=100&brand_id=xxblpflyevlitojg
Of course, definitions like brand_id=hoyrrolmwdgldah or brand_id=xxblpflyevlitojg are false. I don't have any idea what the problem could be. I need advice! Thank you all for your help! ;)
You might want to verify your site with Google Webmaster Tools, which will report the URLs it finds that error out.
Your logs are also a valid source, but you need to verify that it really is Googlebot hitting your site and not someone spoofing its User Agent.
Here are instructions to do just that: http://googlewebmastercentral.blogspot.com/2006/09/how-to-verify-googlebot.html
Essentially you need to do a reverse DNS lookup and then a forward DNS lookup after you receive the host from the reverse lookup.
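As a rough illustration, here is a minimal Python sketch of that double lookup (the IP address is a placeholder; use one taken from your own logs):

    import socket

    ip = "66.249.66.1"  # placeholder: an IP pulled from your access logs

    # reverse DNS: real Googlebot IPs resolve to googlebot.com or google.com
    host = socket.gethostbyaddr(ip)[0]

    # forward DNS: the resolved name must point back to the original IP
    forward_ips = socket.gethostbyname_ex(host)[2]

    is_googlebot = host.endswith((".googlebot.com", ".google.com")) and ip in forward_ips
    print(host, forward_ips, is_googlebot)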
Once you've verified it's the real Googlebot, you can start troubleshooting. You see, Googlebot won't request URLs that it hasn't naturally seen before, meaning it shouldn't be making direct object reference requests. I suspect it's a rogue bot with a Googlebot User Agent, but if it's not, you might want to look through your site to see if you're accidentally linking to those pages.
Unfortunately, you posted the full URLs, so even if you clean up your site, Googlebot will see the links from Stack Overflow and continue to crawl them because they'll be in its crawl queue.
I'd suggest 301 redirecting these URLs to someplace that makes sense to your users. Otherwise I would 404 or 410 these pages so Google knows to remove them from its index.
In addition, if these are pages you don't want indexed, I would suggest adding the path to your robots.txt file so Googlebot can't continue to request more of these pages.
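For example, a robots.txt rule along these lines (using your real paths) blocks further crawling of that handler, since Disallow matches URLs by prefix:

    User-agent: Googlebot
    Disallow: /index.cfm?fuseaction=objects2.view_product_list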
Unfortunately, there's no really good way of telling Googlebot to never crawl these URLs again. You can always go into Google Webmaster Tools and request that the URLs be removed from the index, which may stop Googlebot from crawling them again, but that isn't guaranteed.
I am completely new to Ruby, and I inherited a Ruby system for a product catalogue. Most of my users are able to view everything as they should, but overseas users (specifically in Mexico) cannot contact the server once logged in. They are active users. I'm sorry I cannot be more specific, and the system is private, so I cannot grant access.
Has anyone had any issues similar to this before? Is it a user-end issue or a system error?
Speaking as somebody who regularly ends up on your user's side of the fence, the number one culprit for this symptom is "Clueless administrator". There are many, many sites which generically block either large blocks of IP space or which geolocate and carve out big portions of the world.
For example, a surprising number of American blogs block Asian countries (including Japan) out of a misplaced effort to avoid DDoS attacks (which actually probably originated in Russia or China but, hey, this species of administrator isn't very good at fine-tuning solutions). I have to hop over to my American proxy server to access those sites.
So the first thing I'd do to diagnose your problems is to see whether your Mexican users are making it to the server at all, or whether they're being blocked somewhere earlier (router? firewall? etc). Then, to determine whether the problem is on your end or their end, I'd try to replicate the issue with you proxying your connection through a Mexican proxy and repeating the actions they took to cause the issue.
The fact that they get blocked after logging in could indicate that you have HTTPS issues, for example with an HTTPS accelerator installed [1], or it could be that your frontend server is properly serving up the static content but doing the checking only on dynamic requests.
[1] We've seen some really weird bugs at work caused by a malfunctioning HTTPS accelerator.
If it's working for everyone else, then it would appear that the problem is not with Ruby or Rails working, since they are...
My first thought would be to check for a network issue: are the Mexican users all behind the same proxy server and/or firewall?
Is login handled within the Rails application or via some other resource? Can you see any evidence that requests from Mexican users are reaching your web server at all?
Login is handled by the Rails app. I am currently trying to hunt down the logs; it's taking some time as, again, I am new to this system.
Cheers guys
Maybe INS is cracking down on cyber-immigration.