Foursquare KML feed shows ONLY recent check-ins

For years, I had been using my Foursquare KML URL to populate a Google Map that I then placed on my website.
A few months ago, I noticed that the feed would no longer load earlier check-ins, only the recent ones. The result has been a nearly empty map that goes back only a few months rather than four years.
I've checked with support many times over the past 6-7 months and the response is always quite general. Nobody has actually looked at my account to see what the issue is.
And the last response was to come here and ask you all.
So here's my issue:
The Foursquare KML feed will only populate a map with recent check-ins.
I've refreshed the feed URLs on the foursquare.com/feeds page and tried with the new URL.
When I go to my foursquare.com/user/history page on a desktop, all my check-ins are there and are shown on that map, which uses Mapbox/OpenStreetMap.
If anyone has any advice on how to get this problem fixed, please let me know. And if someone knows of a workaround to instead get whatever feed Foursquare is using to populate Mapbox/OpenStreetMap, that would work great too!
Thanks again!

Have you tried adding ?count=500 to your KML feed URL? By default it seems limited to the last 10.
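If it helps, here's that suggestion as a minimal Python sketch; the exact host/path of the feed comes from your foursquare.com/feeds page, and TOKEN is a placeholder:
import urllib.request
# TOKEN is a placeholder for the private feed token shown on foursquare.com/feeds
feed_url = "https://feeds.foursquare.com/history/TOKEN.kml?count=500"
with urllib.request.urlopen(feed_url) as resp:
    kml = resp.read().decode("utf-8")
# Count placemarks to confirm more than the default 10 check-ins came back
print(kml.count("<Placemark>"), "check-ins in the feed")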

Related

Google API (YouTube search) always sorts results by relevance instead of date

I use the Google API to get a JSON result of my own YouTube channel's videos.
The URL has not changed, but suddenly Google returns the JSON ordered by relevance instead of by date.
https://www.googleapis.com/youtube/v3/search?part=snippet&channelId=UCqEcEGQ0sG89j7ZhOdgFOyg&maxResults=24&order=date&type=video&videoType=any&key=<key>
The first result returned is a two-year-old video instead of yesterday's.
This call worked until last week (week 11 of 2016).
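For reference, here's the same call as a minimal Python sketch (the API key is a placeholder):
import json, urllib.parse, urllib.request
params = {
    "part": "snippet",
    "channelId": "UCqEcEGQ0sG89j7ZhOdgFOyg",
    "maxResults": "24",
    "order": "date",        # newest first is what I'm asking for
    "type": "video",
    "videoType": "any",
    "key": "YOUR_API_KEY",  # placeholder
}
url = "https://www.googleapis.com/youtube/v3/search?" + urllib.parse.urlencode(params)
with urllib.request.urlopen(url) as resp:
    items = json.load(resp)["items"]
print(items[0]["snippet"]["publishedAt"])  # with order=date this should be the newest upload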
From this thread, it seems this is a bug and that the YouTube team is aware of it. A Google employee responded 3 days ago with:
"YouTube is aware the search/sorting functions aren't working as expected – this is temporary and part of our efforts to better respond, review and remove graphic, violative content from YouTube. Thanks for your patience while we work through this. Will update this thread when these features are working normally again, feel free to subscribe for updates."
You can subscribe to the thread for faster updates.

SEO: How to get rid of the webpage titles below the main link URL on Google

Recently I changed my website, which is now hosted on a different server (the previous server, hosted by another company, is not available anymore).
Everything is different on my new website, including the content, the layout, the design and, most important here, the format of the URLs.
The only thing I kept is my domain name, which has been redirected to the new server.
Keeping the same domain name is the issue:
The problem is that when I search for my website on Google, the main link displayed is OK, but below this link there are 4 titles (Google's "sitelinks") corresponding to 4 sections of my previous website.
Clicking on them leads to old URLs that don't exist anymore.
You get a kind of cached result with no CSS, and users are complaining a lot about that.
I opened an account on "Google Webmasters", declared a brand-new sitemap.xml, and requested a fresh Googlebot crawl (three times already).
It's been a week now and the titles below the main link on Google remain the same.
How can I get rid of these?
On "Google webmasters" I tried to "ban" the url's of these titles. It kinda works but not as I expected: The titles remain there but there is no more description below them (which doesn't solve my issue, it just makes it uglier).
Another difference is that one of these links finally disappeared … but another outdated section link has taken its place. It can go like this forever as there are too many possible links to ban with no certainty of result.
What I would like is just keep the main link on Google and get rid of these "sub" titles. At least the old ones.
PS: I never asked for these titles in the first place. they just appeared a long time ago.
I don't mind getting the new sections there but certainly not the old ones.
Thank you for your help.
First of all, I would block the old site's URLs with robots.txt to prevent any further crawls driven by cached sitemap.xml files.
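A minimal robots.txt sketch; /old-blog/ and /old-shop/ are hypothetical stand-ins for the old site's section paths:
User-agent: *
Disallow: /old-blog/
Disallow: /old-shop/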
If you have a site with a decent amount of SEO traffic, I'd create .htaccess 301 redirects for the most important results.
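A sketch of those redirects in .htaccess, again with hypothetical paths; map each important old URL to its closest new equivalent:
Redirect 301 /old-blog/ https://www.example.com/blog/
Redirect 301 /old-shop/ https://www.example.com/shop/
One caveat: Googlebot won't recrawl a URL that robots.txt disallows, so it can't see a 301 placed on that same URL; use one mechanism or the other for any given URL.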
It can take some time until the de-indexing starts. I waited about two weeks.

Twitter API 1.1 URL count alternative

I've been using the old URL API (v1) to get the tweet count for a given URL; lately I also needed the retweets and started searching about that.
This is the exact URL I'm using right now:
http://urls.api.twitter.com/1/urls/count.json?url=http://google.com
From some reading, I see the v1 API is deprecated, but at least it's still working.
I found some questions on the dev page of twitter:
https://dev.twitter.com/discussions/12643
Those questions are a little old and offer no specific solution to the problem. The nearest thing to one was the Search API (search/tweets), which could be good but is not an exact replacement for the urls/count method:
"Please note that Twitter's search service and, by extension, the Search API is not meant to be an exhaustive source of Tweets. Not all Tweets will be indexed or made available via the search interface."
It also has a limit of at most 100 results per 'page'. It does return a link to the next set of objects, which is good, but once a search reaches a million results I'd have to fetch page after page just to know how many tweets I got, making far too many requests to the API...
Some questions on the Twitter dev pages suggested using the Streaming API. I've tried statuses/filter, but it doesn't work very well given a URL as the track parameter (which they said is the keyword to track).
So, has anyone who's been using the old urls/count found a reliable alternative in the new API v1.1, specifically to get the tweets and retweets for a given URL?
The official suggestion from Twitter staff is to use either the search/tweets endpoint (which only has the last 7 days of data) or the Streaming API (where you maintain the counters yourself, which makes everything far too complicated for a d*mn counter).
As an extra warning, the old endpoint (http://urls.api.twitter.com/1/urls/count.json?url=YOUR_URL) will stop working on November 20th, and according to this blog post from Twitter there are no plans to replace it with anything in the short term; they are even removing the count from their own buttons.
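A minimal sketch of the search/tweets route in Python, assuming app-only bearer-token auth; it inherits the 7-day window and indexing caveats quoted above, and retweets come back as regular statuses in the results:
import json, urllib.parse, urllib.request
BEARER = "YOUR_BEARER_TOKEN"  # placeholder app-only auth token
def count_tweets(target_url, max_pages=10):
    # Count tweets (including retweets) mentioning target_url via search/tweets
    total, max_id = 0, None
    base = ("https://api.twitter.com/1.1/search/tweets.json?count=100&q="
            + urllib.parse.quote(target_url, safe=""))
    for _ in range(max_pages):
        url = base + ("&max_id=%d" % max_id if max_id else "")
        req = urllib.request.Request(url, headers={"Authorization": "Bearer " + BEARER})
        with urllib.request.urlopen(req) as resp:
            statuses = json.load(resp)["statuses"]
        if not statuses:
            break
        total += len(statuses)
        max_id = min(s["id"] for s in statuses) - 1  # page backwards through older results
    return total
print(count_tweets("http://google.com"))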

Not all comments are visible when pulling a post using the Facebook Graph API on iPhone

I searched through all the posts about the Facebook Graph API and didn't find anything about this. Here's the issue.
I'm working on an iPhone app for a company, and for the news section of this app I'm pulling all the posts and comments from the wall of the company's Facebook page using the Graph API.
The way I do this is: first I pull all the posts by sending the request:
[facebook requestWithGraphPath:@"company name/feed" andDelegate:self];
And I receive an NSDictionary with all the posts and information about them, including the number of comments. I put all the posts in a table view, and when you tap one of them, a view controller with the comments opens, where I request the comments for that post:
NSString *postId = [self.post objectForKey:@"id"];
NSString *request = [NSString stringWithFormat:@"%@/comments", postId];
[facebook requestWithGraphPath:request andDelegate:self];
I'm receiving the array of comments, but some of them are missing. I guess it's because of privacy settings some people have on their accounts.
I'm just wondering whether someone has had the same issue and knows how to work around it, or knows what privacy settings a user needs to change in their Facebook account for their comments to be visible.
Thanks.
Just wanted to add that this works: grab the feed for all the basic wall-post information, then grab the comments for each post individually. It requires more complex refresh methods and a little trickery (trust your comments array over the JSON comment-count number where you can), but at least it gets it right.
I was grabbing the feed to get post IDs, then grabbing each post individually to get the correct information. However, just 2 days ago I had some really funny stuff going on where the same Facebook post request in iOS would return 2 of the 3 comments, the Chrome browser returned 1 comment (the latest one), and the request in Firefox returned the other 2 comments but not the newest one. It didn't matter whether I was logged in or not when using the browser to test the response. This happened for about half the posts with comments.
So I tried using the access token in the URLs on the Facebook Developers site and changing the request to this particular post, and it returned all the correct information straight away! It got to the point where I even created a new Facebook app to get a new app ID, and a fresh project in Xcode, to eliminate all possibilities. It didn't make a difference.
So thanks to this thread I tried the {post_id}/comments GET, and it works correctly. I've done the same thing for likes, to eliminate that potentially breaking further down the line as well!
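Outside of iOS, that per-post fetch is just an HTTP GET; a minimal Python sketch with a placeholder post ID and token:
import json, urllib.request
post_id = "POST_ID"          # placeholder, e.g. "pageid_postid"
token = "YOUR_ACCESS_TOKEN"  # placeholder access token
url = "https://graph.facebook.com/%s/comments?access_token=%s" % (post_id, token)
with urllib.request.urlopen(url) as resp:
    comments = json.load(resp)["data"]
print(len(comments), "comments returned")  # trust this over the feed's embedded count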
The Graph API works in mysterious ways and there are countless bugs actually open, but to keep it simple: you'll need to pass a valid access_token to retrieve all comments from Facebook.
Meaning https://graph.facebook.com/page_id/feed?access_token=blah
The API will return JSON with links for pagination. You can use them to browse through, or retrieve a larger amount of data directly:
https://graph.facebook.com/page_id/feed?access_token=blah&limit=1000
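A sketch of walking that pagination in Python (the page ID and token are placeholders); the paging.next link the API returns already carries the token and cursor:
import json, urllib.request
url = "https://graph.facebook.com/PAGE_ID/feed?access_token=YOUR_TOKEN&limit=100"
posts = []
while url:
    with urllib.request.urlopen(url) as resp:
        page = json.load(resp)
    posts.extend(page.get("data", []))
    url = page.get("paging", {}).get("next")  # absent on the last page
print(len(posts), "posts fetched")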
Note that using a limit higher than 1000 will lead to bugs and possibly invalid data... that's a known bug. There are also bugs in the pagination logic that may or may not be fixed as of 2011... you'll have to check.
The comment count versus the actual number of comments is also buggy and can be off on large pages (I've seen it happen on pages with more than 5k comments per post). There are also some problems with getting the count itself...
Sorry I can't help you more than that, but the Graph API is still a bit of a mess and has a fairly high number of open bugs. You'll have to try it and see whether it works as the documentation explains. But definitely add an access token; it can't hurt, and you'll most likely get the data you want... unless you run into a bug.
Furthermore, the number of comments is sometimes different in /feed than in /post_id/comments.
For instance:
graph.facebook.com/146154582080623/feed returns 1 comment with a count of 3
and
graph.facebook.com/146154582080623_184735008222580/comments (which is a post of the previous page) returns 2 comments
So I'm wondering if privacy is the problem or not.

How on earth does Google Reader parse RSS?

I'm pulling my hair out; I might pull a tooth out next, that's how frustrated I am.
I have deleted (for the purpose of proving a point) ALL my RSS files in my WordPress site:
http://baked-beans.tv
No matter what I edit, Google Reader reads what it wants, i.e. the posts and all their content!
So how on earth am I supposed to edit the content that most of my RSS subscribers will see (since Google Reader is very popular)?
If you look here: http://baked-beans.tv/feed/
There is NO content!
And yet if I add this URL to Google Reader, it generates full posts in the feed.
Furthermore!
If I edit, say, wp-includes/feed-rss2.php, I can see those changes in the RSS parsers of Safari, Firefox, etc., but again, Google just shows the same thing: the entire post.
This really isn't on. If you go to Google Reader and click on "Show Details", it says "Feed URL: http://baked-beans.tv/feed/", which is just a total lie.
I really need to control how people see the posts. They contain hefty video and a lot of images, and Reader renders them in a really unattractive way.
Thanks in advance,
Marc
I'm pretty sure Google is using a cached result, because your feed is completely empty (which is invalid RSS and is probably interpreted as an error condition, like the feed being down).
Try showing a feed that is valid, but empty. That should get Google to pick up the change sooner or later.
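A valid but empty RSS 2.0 document is just the channel metadata with no items, a sketch along these lines (title and description are placeholders):
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Baked Beans</title>
    <link>http://baked-beans.tv/</link>
    <description>Temporarily empty feed</description>
  </channel>
</rss>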
If you'd like to edit the contents of posts that were already crawled by Reader, you'll need to republish them with the same GUID (if using RSS) or ID (if using Atom). Reader keeps copies of posts indefinitely (so that it can show historical data for feeds), and it keys things off of the ID. If it sees a post with the same ID as one it already has, it'll update the content of its copy with the new crawled content (more details here).
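For example, a republished RSS item would keep its original guid while everything else changes (the values here are hypothetical):
<item>
  <title>My video post (trimmed for feeds)</title>
  <link>http://baked-beans.tv/my-video-post/</link>
  <guid isPermaLink="false">http://baked-beans.tv/?p=123</guid>
  <description>A short summary instead of the full embedded video.</description>
</item>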
