youtube.com/get_video_info returning 404 - youtube

In my downloader I used www.youtube.com/get_video_info to get things like the title, views, description, etc. About a month ago that stopped working and returned 404. I then found out you just have to add a few arguments, the URL then being https://www.youtube.com/get_video_info?html5=1&c=TVHTML5&cver=6.20180913&video_id=<video id>.
Now that doesn't work anymore either. Does anyone know what the new URL is? I already tried changing the cver value, as it seems to be some kind of API version.
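For reference, here is a minimal sketch of the request described above and of how its response used to be parsed (this is a reconstruction, not the new URL: the endpoint now returns 404, and the video ID is a placeholder):

import json
import urllib.parse
import urllib.request

# get_video_info returned a form-encoded body whose player_response field
# held the JSON with title, views, description, etc.
video_id = "dQw4w9WgXcQ"  # placeholder ID
url = ("https://www.youtube.com/get_video_info"
       "?html5=1&c=TVHTML5&cver=6.20180913&video_id=" + video_id)
raw = urllib.request.urlopen(url).read().decode()  # now raises HTTP 404
params = urllib.parse.parse_qs(raw)
player_response = json.loads(params["player_response"][0])
details = player_response["videoDetails"]
print(details["title"], details["viewCount"])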

Related

readMask value to get the display names from the new GMB API

I am running this line of code:
location_list = self.service_mbbi_v1.accounts().locations().list(parent=account_name,readMask='name').execute()
And I get the list of location IDs, but I don't manage to get the location display name, just the ID. I wrote to Google about my issue and they told me they are looking into it, but it has been 2 weeks and nothing yet. So I just wanted to see if someone else had the same issue and found a solution or a workaround.
The documentation provides only an example, not a list of possible values:
https://developers.google.com/my-business/reference/businessinformation/rest/v1/accounts.locations/list
And even when I try the example value of the readMask flag I get an error:
[{'#type': 'type.googleapis.com/google.rpc.BadRequest', 'fieldViolations': [{'field': 'read_mask', 'description': 'Invalid field mask provided'}]}]
I found this similar question:
Google Business Profile API readMask
The example they provide works for me, but I still can't get the display name value.
I thought of using the Google place ID I get from the metadata response and seeing if I can use another API to find the name, but it feels like there should be a proper readMask string value for the display name here, and it is not 'displayName'.
Does anyone have a hint of what I can do?
Thanks a lot
If you are trying to get the business name data, your readMask should be 'title'.
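A hedged sketch of the corrected call, reusing the question's service object (readMask fields are comma-separated; 'name' is the resource ID and 'title' holds the business name):

location_list = self.service_mbbi_v1.accounts().locations().list(
    parent=account_name, readMask='name,title').execute()
for loc in location_list.get('locations', []):
    print(loc['name'], loc.get('title'))  # resource ID and display name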

GoogleSheet: IMPORTXML error, resource at url not found?

What you will see from the images below is that A1 is filled with a random number generated by a script. The number changes randomly every time the cursor moves; it's a method for forcing Google Sheets to update the XML data.
As we can see from the 1st picture, IMPORTXML worked like a charm using the recipe =IMPORTXML("Link"&A1, "//target content"), where A1 is the random number needed to update the data.
Well, it worked for the 1st link, but not really for the second one. In the 1st image, B2 uses the last link and shows 1736.5 as the value; that displays fine without the &A1 part.
After adding &A1 to the formula, it gives error #N/A and Resource at url not found as the error detail.
I already tried using another cell with calculated numbers (greater or smaller than A1), and it still gives me that error.
Solution
If you look closely at the second URL, you will notice it ends with an = sign. In URLs this symbol is used to express key-value pairs. With your refresh trick you are, in this case, telling the server to look for a resource that doesn't exist, hence the IMPORTXML error. Just put the generated URL in the browser to see the result.
Try putting another random parameter in the URL that will cause the page to refresh without causing a 404 HTTP error.
For example:
https://www.cnbc.com/quotes/?symbol=XAU=&x=0
Won't cause any error and will give the desired result.
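In the sheet, the refreshing formula would then look something like this (the XPath here is the placeholder from the question, not a real selector):

=IMPORTXML("https://www.cnbc.com/quotes/?symbol=XAU=&x="&A1, "//target content")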

Empty response when startindex >= 100

After a lot of debugging, it finally occurred to me that YouTube seemingly only issues the first 100 comments when using the v2 YouTube API for getting comments. I finally tried using:
curl -Lk -X GET "http://gdata.youtube.com/feeds/api/videos/MShbP3OpASA/comments?alt=json&start-index=100&max-results=50"
And all I get is a response without an entry parameter. That is to say, I do not receive an error response or anything like that; I get a perfectly good response, but without the entry parameter.
Digging a little deeper, in my response the value for openSearch$totalResults is 100, so according to this resource this seems to be the expected result (although it mentions some kind of error message, which I don't get).
But here comes the kicker: When I use
curl -Lk -X GET "http://gdata.youtube.com/feeds/api/videos/MShbP3OpASA/comments?alt=json&start-index=1&max-results=50&orderby=published"
openSearch$totalResults equals 3141, the actual count of the comments.
Now here is my question: since the v2 API was officially deprecated about a week ago, is it possible that Google just set up a limit on the comments, so that only the first 100 comments are accessible? Since the v3 API does not allow for comment retrieval, that would be a real bummer for me.
Does anyone have any ideas?
I've figured out how to retrieve all the comments using the navigation links embedded in the json response.
Suppose you retrieve the first using a link like (python here, but you get the point):
r'https://gdata.youtube.com/feeds/api/videos/' + aVideoID + r'/comments?alt=json&start-index=1&max-results=50&prettyprint=true&orderby=published'
Embedded in the json under "feed" (and before the comments) will be a four-element array called "link". The fourth element will have "rel": "next", and under "href" there will be a link you can use to get the next 50 comments. The link will look something like:
https://gdata.youtube.com/feeds/api/videos/fH0cEP0mvlU/comments?alt=json&orderby=published&alt=json&start-token=EgkI2NqyoZDRvgIosK%2FPosPRvgIw653cmsXRvgI4AUAC&max-results=50&orderby=published
for an original URL of:
https://gdata.youtube.com/feeds/api/videos/fH0cEP0mvlU/comments?alt=json&start-index=1&max-results=50&prettyprint=true&orderby=published
If you follow the next link, it will return similar json to the original link, with another 50 comments. Continue this process until you get all the comments (in my code I check for either the absence of this item in the json or zero comments to determine when to stop).
You need the "&orderby=published" in the original URL because otherwise the "next" links eventually grow too large and cause an error (something in the token the API uses to track which comments you've seen takes a lot of space under the default orderby). The published ordering keeps the "start-token" small, whereas with the default orderby you will start getting 414 Request-URI Too Long errors after about 500 comments.
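A minimal sketch of that loop in Python (illustrative only, since the v2 gdata endpoints are deprecated and may stop responding entirely):

import requests

def fetch_all_comments(video_id):
    url = ('https://gdata.youtube.com/feeds/api/videos/' + video_id +
           '/comments?alt=json&start-index=1&max-results=50&orderby=published')
    comments = []
    while url:
        feed = requests.get(url).json().get('feed', {})
        entries = feed.get('entry', [])
        if not entries:
            break  # no 'entry' item (or empty): every comment has been seen
        comments.extend(entries)
        # follow the 'rel': 'next' navigation link, if present
        url = next((link['href'] for link in feed.get('link', [])
                    if link.get('rel') == 'next'), None)
    return comments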
Hope this helps.

sfGuardPlugin 4.0.2 breaks with sfPropelORMPlugin

A recent pull request requires 'isCrossRef: true' for the many-to-many list widgets to be generated in forms (see the schema sketch below). Pull request: https://github.com/propelorm/sfPropelORMPlugin/pull/90
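As a rough illustration, the flag goes on the join table in schema.yml; a hypothetical excerpt (table and column names are assumptions, not copied from the plugin):

propel:
  sf_guard_user_group:
    _attributes: { isCrossRef: true }
    user_id: { type: integer, foreignTable: sf_guard_user, foreignReference: id, primaryKey: true }
    group_id: { type: integer, foreignTable: sf_guard_group, foreignReference: id, primaryKey: true }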
The default forms will throw a fatal error when they try to set the labels for these lists, which aren't there anymore in the base classes.
Posting this up on SO in case someone else runs into this problem, because it took me a while to figure out.
This was fixed a week ago, see: https://github.com/propelorm/sfPropelORMPlugin/pull/136

Twitter Error Could not post Tweet

What could this error be?
Could not post Tweet. Error: 403 Reason: Status is a duplicate.
Actually, this is an edited message.
I get error code 403 and the reason "Status is a duplicate".
Twitter checks whether a message is a duplicate of a previous one and does not accept it a second time.
So for testing you need to generate new message content each time.
This is documented somewhere at Twitter, but you can also read about it on other sites.
The status is a duplicate; you probably ran your script twice without changing the status message.
Delete your last status update via the Twitter web interface and run the script again, or include date('r') or md5(mt_rand()) in your status message to generate a different one each time the script runs.
I also encountered the same error. What the Twitter site says is that they check tweeted messages and discard (refuse) them if they are the same. The discussion here says to use different text each time you tweet; otherwise, use a different account for tweeting.
import datetime, hashlib

# Append a short hash of the current timestamp so every status is unique
gettime = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
hash = hashlib.md5(gettime.encode()).hexdigest()[:8]  # md5 needs bytes
twitterpost = "foo bar %s" % hash
api.update_status(status=twitterpost)
