My ads were disapproved with DISABLED: third_party_redirect_on_landing_page, what does this mean? - google-ads-api

My client was told by Google that some of my links are "DISABLED: third_party_redirect_on_landing_page". Two of the links given are:
http://www.texasvistamedicalcenter.org/
https://www.saltlakeregional.org/services-directory/emergency-care
One of them does indeed redirect to another page, but the other does not:
curl -I http://www.texasvistamedicalcenter.org/
HTTP/1.1 301 Moved Permanently
...
curl -I https://www.saltlakeregional.org/services-directory/emergency-care
HTTP/2 200
...
This makes me think that Google probably objects to some other URL redirecting to this page, rather than the page being used directly in the ad.
My web search for third_party_redirect_on_landing_page came up empty-handed. How can this be fixed?

You can't redirect to a domain that's different from the domain in your display URL. The domains of your display URL and landing page must be the same.
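If you want to check your final URLs in bulk, here is a minimal sketch (assuming Python 3.9+ and the requests library; the two URLs are the ones from the question) that follows redirects and flags any landing page whose final domain differs from the original:
import requests
from urllib.parse import urlparse

def redirects_off_domain(url):
    # Follow any redirect chain and compare the final host with the original one.
    response = requests.get(url, allow_redirects=True, timeout=10)
    original_host = urlparse(url).netloc.removeprefix("www.")
    final_host = urlparse(response.url).netloc.removeprefix("www.")
    return original_host != final_host, response.url

for url in ("http://www.texasvistamedicalcenter.org/",
            "https://www.saltlakeregional.org/services-directory/emergency-care"):
    off_domain, final_url = redirects_off_domain(url)
    print(url, "->", final_url, "(cross-domain)" if off_domain else "(same domain)")
A redirect that stays on the same domain (such as an http to https hop) is generally not the problem; it is the change of domain that triggers this policy.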

Related

Changing the default 302 HTTP->HTTPS redirect to 301

I have a client running a website on Cloud Run, and two issues came up post-release:
Cloud Run uses a default 302 redirect from HTTP to HTTPS - is there any way of changing those to 301 permanent redirects (temporary redirects are a rather poor choice for SEO)?
I know it's not possible to remove the Cloud Run assigned URLs like *.a.run.app, but is there any hack for adding a noindex directive on them (either via meta tags or the HTTP response header - but those would need to apply to the *.a.run.app URLs, and not the custom production domain)? Alternatively, could a separate /robots.txt file be served only for the *.a.run.app URLs?
I was told a dev tried adding code to noindex the *.a.run.app URLs, but that did not work due to a limitation of Cloud Run itself.
I would appreciate any help on these two.
I believe both of these can be achieved in your application code.
Check this article for an HTTP redirection implementation.
As for noindex, it should also be possible to determine the request host and then respond with an X-Robots-Tag header to disable indexing.
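As a concrete illustration, here is a minimal sketch of both ideas, assuming the service happens to be a Python/Flask app (the question doesn't say which stack is used). Cloud Run terminates TLS and passes the original scheme in the X-Forwarded-Proto header, so the app can issue its own 301, and requests whose Host is the auto-assigned *.a.run.app URL can get an X-Robots-Tag: noindex response header:
from flask import Flask, request, redirect

app = Flask(__name__)

@app.before_request
def force_https():
    # Cloud Run terminates TLS and forwards the original scheme in this header.
    if request.headers.get("X-Forwarded-Proto", "https") == "http":
        return redirect(request.url.replace("http://", "https://", 1), code=301)

@app.after_request
def noindex_on_run_app(response):
    # Keep the auto-assigned *.a.run.app URLs out of search indexes.
    if request.host.endswith(".a.run.app"):
        response.headers["X-Robots-Tag"] = "noindex"
    return response

@app.route("/")
def index():
    return "ok"
The same pattern carries over to any framework: branch on X-Forwarded-Proto for the redirect and on the Host header for the noindex header.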

Why is the content of the YouTube homepage's OpenGraph image a bad URL?

I noticed in the head section of www.youtube.com that the meta property="og:image" content is not a valid URL...
There I read: //s.ytimg.com/yts/img/youtube_logo_stacked-vfl225ZTx.png
A good one, and one that my browser can actually load, is:
http://s.ytimg.com/yts/img/youtube_logo_stacked-vfl225ZTx.png
I am interested to know why Google made this choice.
Thanks from France!
The URL that Google chose is called a protocol-relative (protocol-less) URL. Depending on which protocol you are using to access youtube.com (i.e. http or https), the protocol of the image will change accordingly. So if you check, you can actually access the image over both http and https.
More interestingly, even though the Facebook linter gives an error, when you share the youtube.com home page on Facebook it actually picks up the correct image.
Hope this helps.
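If you want to see the resolution rule in action, here is a small sketch using Python's standard urllib; it shows how the image URL inherits the scheme of the page it appears on:
from urllib.parse import urljoin

image = "//s.ytimg.com/yts/img/youtube_logo_stacked-vfl225ZTx.png"
print(urljoin("http://www.youtube.com/", image))   # http://s.ytimg.com/yts/img/youtube_logo_stacked-vfl225ZTx.png
print(urljoin("https://www.youtube.com/", image))  # https://s.ytimg.com/yts/img/youtube_logo_stacked-vfl225ZTx.png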

Switching between http and https for images located on a sub-domain

My ASP.NET MVC3 site, www.mysite.com, pulls images from images.mysite.com. When I'm not logged into my site and using SSL, it works flawlessly. However, when logged in, I get the
Only secure content is displayed.
message in IE9. I understand that. What's the best way to deal with switching URLs for my images? Should I check whether I'm currently using SSL and point my images to https://images.mysite.com, and otherwise to http://images.mysite.com?
EDIT: This is an e-commerce site, so most of the time the site is browsed unsecured. But after login, I still need to pull some of those same images, and of course if they browse back to a regular catalog page, it would need to access images. Perhaps I will just have to always use https://images.mysite.com. Just seemed like overkill.
I believe the problem only happens when you're on a secure page accessing content over http. So, for pages that can be seen over both http and https, it might be as easy as always using https to get the images, regardless of whether the page itself is http or https.
You will always get that message if you are pulling content from a non-SSL site when viewing over SSL. If your site is mostly SSL-protected, just always pull images from https://images.mysite.com, as you do not get the error when you pull SSL content into a non-SSL page.
Otherwise, you will need to know which pages are only viewable over SSL and which ones are not, and link appropriately.
Lastly, if your site is available over both, you will probably need to look at the HTTPS server variable to determine whether you are on SSL or not, and use that to determine your link (http or https).
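If you do go the route of checking the request, the logic is just a branch on the current scheme. Here is a language-agnostic sketch written as a Python/WSGI helper rather than ASP.NET (images.mysite.com is the host from the question):
def image_url(environ, path):
    # Mirror the scheme of the current request when building the image link.
    scheme = "https" if environ.get("wsgi.url_scheme") == "https" else "http"
    return scheme + "://images.mysite.com/" + path.lstrip("/")

# e.g. image_url({"wsgi.url_scheme": "https"}, "/catalog/shoe.png")
#      -> "https://images.mysite.com/catalog/shoe.png"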
Did you try prefixing with ~ instead of ../ or /?
This worked for me.

OAuth: how to test with local URLs?

I am trying to test OAuth buttons, but they all (Facebook, Twitter, LinkedIn) come back with errors that seem to signal that I cannot test or use them from a local URL.
How do people usually work with OAuth in development if the providers all seem to require non-dev, non-local environments?
Update October 2016: The easiest way now is to use lvh.me, which always points to 127.0.0.1. But make sure to verify that this is still true every time you need to invoke it (because domains can expire or get taken over, and DNS poisoning is always a concern).
Previous Answer:
Since the callback request is issued by the browser, as an HTTP redirect response, you can set up your .hosts file or equivalent to point a domain that is not localhost to 127.0.0.1.
Say, for example, you register the following callback with Twitter: http://www.publicdomain.com/callback/. Make sure that www.publicdomain.com points to 127.0.0.1 in your hosts file, AND that Twitter can do a successful DNS lookup on www.publicdomain.com, i.e. the domain needs to exist and the specific callback should probably return a 200 status if requested.
EDIT:
I just read the following article: http://www.tonyamoyal.com/2009/08/17/how-to-quickly-set-up-a-test-for-twitter-oauth-authentication-from-your-local-machine, which was linked to from this question: Twitter oAuth callbackUrl - localhost development.
To quote the article:
You can use bit.ly, a URL shortening service. Just shorten the [localhost URL such as http://localhost:8080/twitter_callback] and register the shortened URL as the callback in your Twitter app.
This should be easier than fiddling around in the .hosts file.
Note that now (Aug '14) bit.ly does not allow link forwarding to localhost; however, the Google link shortener works.
PS edit (Nov '18): the Google link shortener stopped supporting localhost or 127.0.0.1.
You can also use ngrok: https://ngrok.com/. I use it all the time to have a public server running on my localhost. Hope this helps.
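For reference, a typical ngrok session is a single command (the port being whatever your local app listens on); it prints a public https forwarding URL that you can register as the OAuth callback:
ngrok http 8080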
Other options, which even provide your own custom domain for free, are serveo.net and https://localtunnel.github.io/www/
Or you can use https://tolocalhost.com/ and configure how it should redirect the callback to your local site. You can specify the hostname (if different from localhost, e.g. yourapp.local) and the port number. For development purposes only.
For Mac users, edit the /etc/hosts file. You have to use sudo vi /etc/hosts if it's read-only. After authorization, the OAuth server sends the callback URL, and since that callback URL is rendered in your local browser, the local DNS setting will work:
127.0.0.1 mylocal.com
Set your local domain to mywebsite.example.com (and redirect it to localhost) -- even though the usual convention is to use something like mywebsite.dev. This will allow robust automatic testing.
Although authorizing .test and .dev is not allowed, authorizing example.com is allowed in Google OAuth2.
(You can redirect any domain to localhost in your hosts file (unix/linux: /etc/hosts).)
Why mywebsite.example.com? Because example.com is a reserved domain name, so:
there would be no naming conflicts on your machine, and
there is no data risk if your test system exposes data to a not-redirected-by-mistake example.com.
You can edit the hosts file on Windows or Linux:
Windows: C:\Windows\System32\Drivers\etc\hosts
Linux: /etc/hosts
(The file's own header notes that localhost name resolution is handled within DNS itself.)
127.0.0.1 mywebsite.com
After you finish your tests, just comment out the line you added to disable it:
# 127.0.0.1 mywebsite.com
Google doesn't allow testing the auth API on local domains such as http://webproject.dev or .loc, etc., nor via a Google short link or bit.ly :) that shortens your local URL (http://webproject.dev). Google accepts only URLs that start with http://localhost/...
If you want to test the Google auth API, follow these steps:
If you use OpenServer, go to the settings panel, click on the Aliases tab, open the dropdown, find localhost and choose it.
Now choose your local web project's root folder in the next dropdown, next to the first one.
Click the Add button and restart OpenServer.
Your local project is now available at http://localhost/
You can also paste this local URL into the redirect URL field of the Google auth API...
This answer applies only to Google OAuth
It is actually very simple and I am surprised it worked for me (I am still sceptical of what my eyes are seeing).
Apparently you can add localhost as a trusted domain on the Google Developer Console, since localhost is an exception for most rules as you can see here.
This can be done on this page under OAuth 2.0 Client IDs. Click edit and then add http://localhost:8000 or similar ports, and hit save.
It is crucial that you include http or https in the input box.
HTTP or HTTPS?
I am once again surprised that Google allows http, although do note that there is a minor security risk if your application has been released to production.
If you want to be extra cautious, you can choose to stick with https. This will require you to set up an SSL certificate on your localhost server.
This is easier than you think, since the SSL certificate need not be valid. Many HTTP servers should give you this option. You will just have to click the "proceed anyway" button in your browser to bypass the big red warning.
This is more secure than http since either a) users will see a big red warning if hackers are trying something phishy, or b) the only time they won't see this warning is if the user intentionally set up a self-hosted SSL certificate, in which case they probably know what they are doing (I suppose a virus could technically do this as well, but at that stage they've already gotten enough control of a user's system to do anything they want).
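If you take the https route, here is a minimal sketch of a self-signed localhost setup (file names and the port are arbitrary): generate the certificate with openssl, then serve the current directory over TLS with Python's standard library.
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem -days 365 -subj "/CN=localhost"

# serve_https.py - serves https://localhost:8443 with the self-signed certificate
import http.server, ssl

server = http.server.HTTPServer(("localhost", 8443), http.server.SimpleHTTPRequestHandler)
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain("cert.pem", "key.pem")
server.socket = context.wrap_socket(server.socket, server_side=True)
server.serve_forever()
Your browser will show the self-signed-certificate warning described above; click through it once and register https://localhost:8443 (plus your callback path) as the redirect URI.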
I ran into some issues with the tools mentioned in other answers such as http://tolocalhost.com not forwarding query parameters (not to mention you have to visit the page and configure it first, same case with https://thomasmcdonald.github.io/Localhost-uri-Redirector/) and http://lvh.me not being useful to me because I run a proxy on my local machine and need the public URL to point to a private URL like http://mywebsite.dev.
So I made my own tool that filled my needs and may fill yours:
https://redirectmeto.com
Examples:
https://redirectmeto.com/https://www.google.com/search?q=puppies
http://redirectmeto.com/http://localhost:4000/oauth/authorize
http://redirectmeto.com/http://client.dev/page
Another valuable option would be https://github.com/ThomasMcDonald/Localhost-uri-Redirector. It's a very simple HTML page that redirects to whatever host and port you configure in the UI.
The page is hosted on GitHub (https://thomasmcdonald.github.io/Localhost-uri-Redirector), so you can use that as your OAuth2 redirect URL, configure your target host and port in the UI, and it will just redirect to your app.
If you have a domain, you can create a subdomain that redirects to your local entry point; it works for me.
I created a public subdomain, oauth-test-local.alexisgatuingt.fr, that redirects you to http://127.0.0.1:8000/oauth/callback/google with the returned data.
Taking Google OAuth as a reference:
In your OAuth client tab
Add your app URI, for example http://localhost:3000, to the Authorized JavaScript origins URIs.
In your OAuth consent screen
Add mywebsite.com to the Authorized domains.
Edit the hosts file on Windows or Linux (Windows: C:\Windows\System32\Drivers\etc\hosts, Linux: /etc/hosts) to add 127.0.0.1 mywebsite.com (N.B. comment out any other existing 127.0.0.1 entry).

Googlebot, false links

I have a little problem with Googlebot. I have a server running Windows Server 2009; the system is called Workcube and it runs on ColdFusion. There is a built-in error reporter, so I receive every error message, especially those concerning Googlebot trying to visit a false link that doesn't exist! The links look like this:
http://www.bilgiteknolojileri.net/index.cfm?fuseaction=objects2.view_product_list&product_catid=282&HIERARCHY=215.005&brand_id=hoyrrolmwdgldah
http://www.bilgiteknolojileri.net/index.cfm?fuseaction=objects2.view_product_list&product_catid=145&HIERARCHY=200.003&brand_id=hoyrrolmwdgldah
http://www.bilgiteknolojileri.net/index.cfm?fuseaction=objects2.view_product_list&product_catid=123&HIERARCHY=110.006&brand_id=xxblpflyevlitojg
http://www.bilgiteknolojileri.net/index.cfm?fuseaction=objects2.view_product_list&product_catid=1&HIERARCHY=100&brand_id=xxblpflyevlitojg
Of course, a definition like brand_id=hoyrrolmwdgldah or brand_id=xxblpflyevlitojg is false. I don't have any idea what the problem could be. I need advice! Thank you all for your help! ;)
You might want to verify your site with Google Webmaster Tools, which will report the URLs it finds that error out.
Your logs are also valid, but you need to verify that it really is Googlebot hitting your site and not someone spoofing their User Agent.
Here are instructions to do just that: http://googlewebmastercentral.blogspot.com/2006/09/how-to-verify-googlebot.html
Essentially you need to do a reverse DNS lookup and then a forward DNS lookup after you receive the host from the reverse lookup.
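A minimal sketch of that double lookup, using Python's standard socket module (the example IP is one that belonged to Googlebot at the time of writing):
import socket

def is_real_googlebot(client_ip):
    # Reverse lookup: the PTR name must be under googlebot.com or google.com.
    host = socket.gethostbyaddr(client_ip)[0]
    if not host.endswith((".googlebot.com", ".google.com")):
        return False
    # Forward lookup: the name must resolve back to the same IP address.
    return client_ip in socket.gethostbyname_ex(host)[2]

print(is_real_googlebot("66.249.66.1"))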
Once you've verified it's the real Googlebot, you can start troubleshooting. You see, Googlebot won't request URLs that it hasn't naturally seen before, meaning it shouldn't be making direct object reference requests. I suspect it's a rogue bot with a Googlebot User Agent, but if it's not, you might want to look through your site to see if you're accidentally linking to those pages.
Unfortunately you posted the full URLs, so even if you clean up your site, Googlebot will see the links from Stack Overflow and continue to crawl them, because they'll be in its crawl queue.
I'd suggest 301 redirecting these URLs to someplace that makes sense to your users. Otherwise I would 404 or 410 these pages so Google knows to remove them from its index.
In addition, if these are pages you don't want indexed, I would suggest adding the path to your robots.txt file so Googlebot can't continue to request more of these pages.
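For example, a robots.txt rule along these lines (the path and parameter are taken from the URLs in the question) blocks further crawling of those listing URLs; note that it only prevents crawling, it does not remove anything already indexed:
User-agent: Googlebot
Disallow: /index.cfm?fuseaction=objects2.view_product_list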
Unfortunately, there's no really good way of telling Googlebot to never ever crawl these URLs again. You can always go into Google Webmaster Tools and request that the URLs be removed from the index, which may stop Googlebot from crawling them again, but that doesn't guarantee it.
