Geolocation with WordPress MU / BuddyPress

I'm looking for the best solution to the following problem.
I have installed WordPress MU and want to create child blogs for different areas of the world, but with a single domain that can switch between them instantly based on the user's IP address.
Is there an extension for WordPress MU or BuddyPress that does this, or do I need something on the server, say in .htaccess?

Check out this plugin: http://wordpress.org/extend/plugins/ip-to-country/
You could use it to look up the visitor's country, check the value returned, and set a cookie that redirects the user to the geo-specific blog, and will do so again if/when they return to the site.
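As a rough sketch of that approach (assuming the plugin exposes a country-lookup function; ip_to_country() and the blog URLs below are placeholders, not the plugin's real API):
<?php
// Redirect first-time visitors to a geo-specific child blog, then remember
// the choice in a cookie so returning visitors are routed the same way.
add_action( 'init', function () {
    if ( isset( $_COOKIE['geo_blog'] ) ) {
        return; // already geo-routed on a previous visit
    }
    $country = ip_to_country( $_SERVER['REMOTE_ADDR'] ); // hypothetical helper
    $blogs   = array(
        'US' => 'http://example.com/us/',
        'DE' => 'http://example.com/de/',
    );
    if ( isset( $blogs[ $country ] ) ) {
        setcookie( 'geo_blog', $country, time() + 30 * 86400, '/' );
        wp_redirect( $blogs[ $country ] );
        exit;
    }
} );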

Check out the Google APIs for geocoding, user detection, and geo-grouping options (e.g., redirecting all users from a combined region like 'Europe'). Rather than doing all the heavy lifting on your own server, work with the Google API, get a nicely formatted answer back, and redirect based on that. For anything involving geographic lookups, Google's APIs are the first place to look.

Related

Google YOLO stopped working: "The client origin is not permitted to use this API"

I assume it has something to do with this:
"For me Google One Tap stopped working on all my sites that previously worked. I added an HTTP referrer restriction to the API in console.developers.google.com, but I still get the warning 'The client origin is not permitted to use this API.' Any thoughts? If you go to https://www.wego.com/ you can see that Google One Tap still works there."
https://news.ycombinator.com/item?id=17044518#17045809
But Google YOLO has stopped working for everyone. Like many people, I use it for login, and it just stopped working.
My domains are, obviously, added in console.developers.google.com.
Is there any ETA for a fix? Some information would be great for the people who rely on it.
Google YOLO is not disabled; it is open to a small list of Google partners.
The reason you were able to access it earlier is that it was open to everyone for a short period, but the whitelist has now been re-enabled.
Reference:
https://twitter.com/sirdarckcat/status/994867137704587264
Google YOLO was put behind a whitelist after a client-side exploit became clear to Google:
people could cover the login button of the prompt with something like a cookie-consent banner (which, as we all know, people accept automatically).
That made it easy to steal a user's Gmail address or other details, so Google decided to whitelist the API and review the sites using it, to ensure they use it as they should.
Google retroactively labeled One Tap as a "closed beta":
https://developers.google.com/identity/one-tap/web
"The beta test program for this API is currently closed. We are improving the API's cross-browser functionality and will provide updates here in the coming months."
The link for the entire project currently returns a 404, but the beta statement is visible on the Wayback Machine.

Google script origin request URL

I'm developing a Google Sheets add-on. The add-on calls an API. In the API configuration, a URL like https://longString-script.googleusercontent.com had to be added to the list of URLs allowed to make cross-domain requests.
Today, I noticed that this URL changed to https://sameLongString-0lu-script.googleusercontent.com.
The URL changed about three months after development started.
I'm wondering what makes the URL change, because a change also means reconfiguring our back end every time.
EDIT: Thanks for both your responses so far. They helped me understand better how this works, but I still don't know if/when/how/why the URL is going to change.
Quick update: the changing part of the URL was "-1lu" for another user today (but not for me when I was testing). It's quite annoying, since we can't use wildcards in the Google dev console's redirect URI field. Am I supposed to paste in a batch of "-xlu" URIs, with x from 1 to, say, 10, so I don't have to touch this for a while?
For people coming across this now: we've just encountered this issue while developing a Google add-on as well. We've needed to add multiple origin URLs to our OAuth client for sign-in, following the longString-#lu-script.googleusercontent.com pattern mentioned by the OP.
This is annoying, as each URL has to be entered separately in the authorized URLs field (subdomain and wildcard matching aren't allowed). It's also fragile, since it breaks whenever Google changes the URLs they host our add-on from. Furthermore, I wasn't able to find any documentation from Google confirming that these are the script origins.
URLs are managed by the host. At the most basic level, when you build a web server, you decide what to call it and what to call the pages on it. Google and other large content providers, with farms of servers and redundant data centers, manage this a bit differently, but for your purposes the effect is the same: you need to ask them, since they are the hosting provider of your cloud content.
Something that MIGHT be related: Google recently rolled out (or at least scheduled) some changes to the googleusercontent.com domain and Picasa images. So the Google support forums are the way to go with this question for the freshest answers, since the cause of a URL change is usually specific to that moment in time and not necessarily something that will keep changing. But again, they would need to confirm whether it was related to those planned changes... or not. :-)
When you find something out, update this question in case it is of use to others, especially if they tell you it wasn't a one-time change on their end.
This is more likely related to changing the origin under the same-origin policy. As discussed:
A page may change its own origin with some limitations. A script can set the value of document.domain to its current domain or a superdomain of its current domain. If it sets it to a superdomain of its current domain, the shorter domain is used for subsequent origin checks.
For example, assume a script in the document at http://store.company.com/dir/other.html executes the following statement:
document.domain = "company.com";
After that statement executes, the page can pass the origin check with http://company.com/dir/page.html
So, as noted:
When using document.domain to allow a subdomain to access its parent securely, you need to set document.domain to the same value in both the parent domain and the subdomain. This is necessary even if doing so is simply setting the parent domain back to its original value. Failure to do this may result in permission errors.
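To illustrate (a sketch, assuming a parent page on company.com embedding a frame from store.company.com):
// In the parent page at http://company.com/ :
document.domain = "company.com"; // set explicitly, even though this is already its value
// In the framed page at http://store.company.com/ :
document.domain = "company.com"; // both pages now share the same effective origin
After both assignments run, scripts in either page can access the other's DOM without a cross-origin error.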

Verifying Googlebot in Rails

I am looking to implement First Click Free in my Rails application. Google has information here on how to verify whether Googlebot is viewing your site.
I have been searching for an existing Rails solution for this but have been unable to find anything. So firstly, does anyone know of one? If not, could anyone point me in the right direction for implementing the verification that page suggests?
Also, that solution has to do a DNS lookup every time it tries to detect Google, which seems like a big performance hit on every page load. I could cache an IP once it has been verified, but Google has stated that their IPs change, so at some point an IP may no longer belong to them. That probably doesn't happen often, though, so it may not be a big issue.
Many thanks!!
Check out the browser gem: https://github.com/fnando/browser
What I'd do is use the
browser.bot?
method to check whether your site is being accessed by a bot. If you care about Googlebot specifically, you could check whether
browser.name
includes "googlebot". Keep in mind that this gem only checks the User-Agent sent by the client, which can of course be spoofed. It sounds like that isn't a huge concern for your purposes.
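A minimal sketch of that approach in a Rails controller (assuming the browser gem is installed; this is a User-Agent check only, not real verification):
# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  before_action :detect_googlebot

  private

  def detect_googlebot
    b = Browser.new(request.user_agent)
    # User agents can be spoofed, so treat this as a hint, not proof.
    @googlebot = b.bot? && b.name.to_s.downcase.include?("google")
  end
end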
I've built a Ruby gem for this recently; it's called "legitbot".
You can learn whether a web request comes from a supported bot using
bot = Legitbot.bot(userAgent, ip)
"legitbot" does this by looking at the User-Agent and searching for a bot signature, i.e., how bots identify themselves. That alone doesn't guarantee the request's IP really belongs to, e.g., Googlebot. To make sure it does, call
bot.detected_as # => "Google"
bot.valid? # => true
bot.fake? # => false
Supported bots include Googlebot, Yandex bots, Bing, Baidu, and DuckDuckGo.
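Putting it together for the First Click Free case, usage might look like this (a sketch; the gem's API is as shown above, but the controller and action names are hypothetical):
class ArticlesController < ApplicationController
  def show
    bot = Legitbot.bot(request.user_agent, request.remote_ip)
    if bot && bot.detected_as == "Google" && bot.valid?
      # Verified Googlebot: serve the full article per First Click Free.
      render :full_article
    else
      render :teaser # normal paywall rules for everyone else
    end
  end
end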

Using a non-Google Analytics tag in the URL alongside regular Google Analytics tags

I'm having some issues with Google Analytics URL parameters. Previously, I built URLs with the Google Analytics URL Builder. These have enabled me to track where visitors to my site come from, how successful various marketing campaigns have been, etc.
Recently, I've started using another tag in the URL, one which has nothing to do with Google Analytics but which changes the telephone number shown on my site when the visitor arrives. For example, I'll add &ctcc=adwords to the end of my tracking URL, and a specific phone number will appear on my site when the user comes through, so I can track how many calls my AdWords spend has generated.
However, since I've been using this ctcc code, Google Analytics no longer seems to be tracking the traffic numbers to my site :(
Any idea how I can incorporate both parameters into the URL and ensure that they both work as expected?
Thanks in advance
It looks like this is a problem with how your server redirects traffic that carries a ctcc query parameter. Inspecting the request and its response headers shows that the ctcc parameter is used in some server-side tracking (as best I can tell), and that the server is set up to redirect and strip ctcc whenever it receives a request containing it, rewriting & as ; along the way.
Not being familiar with the system in use, I can't provide details, but you need to reconfigure those redirects to stop changing & into ;. It's the replacement of ampersands with semicolons that is messing up your GA data: once the separators become semicolons, the utm_ parameters no longer parse as separate query parameters, so Analytics never sees them.
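For comparison, this is what a non-mangling redirect looks like if the server happens to use Apache mod_rewrite (purely illustrative; the paths are placeholders, and your system may not use mod_rewrite at all):
# .htaccess: redirect the old path to the landing page. With no query
# string in the substitution, mod_rewrite passes the original query
# string through unchanged, so &ctcc=... and the utm_ parameters survive.
RewriteEngine On
RewriteRule ^promo$ /landing [R=302,L]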

Google indexed my domain anyway?

I have a robots.txt like the one below, but Google has still indexed my domain. Basically, they've indexed mydomain.com but not mydomain.com/any_page.
User-agent: *
Disallow: /
I mean, how can I go back further than /, which I thought was the root of the domain?
Note this domain is a work in progress, hence I don't want Google or any other search engine seeing it for a minute.
If you don't have one already, get a Google Webmaster Tools account. It includes a URL removal tool that may work for you.
This doesn't address the underlying problem, though: robots.txt blocks crawling, not indexing, so a search engine can still list your homepage URL if other sites link to it, and it may also ignore or misinterpret your robots.txt file.
If you REALLY want your site to be off the air until it launches, your best bet is to actually take it off the air: make the site inaccessible except by password. If you put HTTP Basic authentication on your document root, no search engine will be able to index anything, but you'll have full access with a password.
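With Apache, for example, that can be as small as this (a sketch; the .htpasswd path is a placeholder, and you'd create that file with the htpasswd utility):
# .htaccess in the document root: password-protect the whole site
AuthType Basic
AuthName "Site under construction"
AuthUserFile /path/to/.htpasswd
Require valid-user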
