I've been poking around the YouTube live chat API to render a custom chat feed, and I'm wondering how I can show membership/sponsorship badges next to users the way the YouTube site itself does.
Looking at a response from the API, I can see that YouTube does tell me the user is a member/sponsor, but it doesn't include the membership level/duration or which badge image should be shown:
{
  "kind": "youtube#liveChatMessage",
  "etag": "MHpDf4piJnYR2X3lP-7mwBavfWM",
  "id": "LCC.CjgKDQoLd1VwYUIzYTdkVW8qJwoYVUNEWExPVjNTMEdUd21EOFY4R1A2dzlREgt3VXBhQjNhN2RVbxI7ChpDSVRodDQzS292VUNGZVV0clFZZHNJRUwzZxIdQ1B1VHJiYV9vdlVDRllhRGdnb2RaUE1LanctMjY",
  "snippet": {
    "type": "textMessageEvent",
    "liveChatId": "Cg0KC3dVcGFCM2E3ZFVvKicKGFVDRFhMT1YzUzBHVHdtRDhWOEdQNnc5URILd1VwYUIzYTdkVW8",
    "authorChannelId": "UCYC1zf9Dznp-xpe9rwEopLQ",
    "publishedAt": "2022-01-08T16:31:12.317Z",
    "hasDisplayContent": true,
    "displayMessage": "Instead of waiting 30 seconds you had to spam facecam now you get a 5 minute timeout",
    "textMessageDetails": {
      "messageText": "Instead of waiting 30 seconds you had to spam facecam now you get a 5 minute timeout"
    }
  },
  "authorDetails": {
    "channelId": "UCYC1zf9Dznp-xpe9rwEopLQ",
    "channelUrl": "http://www.youtube.com/channel/UCYC1zf9Dznp-xpe9rwEopLQ",
    "displayName": "Cody Kerley",
    "profileImageUrl": "https://yt3.ggpht.com/ytc/AKedOLQFiwv-x6ukfTOh7pD7WlCe7Ss1AB5wH7QAF53uiQ=s88-c-k-c0x00ffffff-no-rj",
    "isVerified": false,
    "isChatOwner": false,
    "isChatSponsor": true,
    "isChatModerator": true
  }
}
But if I look at how this message was shown in the YouTube chat itself, the user has the correct membership badge for their level/duration, specific to this channel, and the tooltip also shows the level/duration of the membership/sponsorship.
How can I get this information from the API for each chat message so that I can render the badge correctly myself?
Cheers.
As you said, there doesn't seem to be an official YouTube Data API v3 endpoint providing membership badges for YouTube live chat messages.
However, I reverse-engineered YouTube live chat and here is a solution:
Get a continuation token starting with 0ofMyAO (there seem to be two, and both work) by executing the following (don't forget to replace VIDEO_ID with your YouTube live video id):
curl -s 'https://www.youtube.com/live_chat?v=VIDEO_ID' -H 'User-Agent: Firefox/99'
Use this continuation token to fetch all the information about current YouTube live chat messages by executing the following (don't forget to replace CONTINUATION_TOKEN with the token you grabbed at step 1; note that the key in the URL is not a YouTube Data API v3 key, so don't worry about it):
curl -s 'https://www.youtube.com/youtubei/v1/live_chat/get_live_chat?key=AIzaSyAO_FJ2SlqU8Q4STEHLGCilw_Y9_11qcW8' -H 'Content-Type: application/json' --data-raw '{"context":{"client":{"clientName":"WEB","clientVersion":"2.9999099"}},"continuation":"CONTINUATION_TOKEN"}'
You'll get all the information about the live chat messages posted since the moment you grabbed the continuation token at step 1. However, the continuation token seems to expire every 5 minutes, so grab a new one from the response of step 2 or by doing step 1 again.
Note 0: during the 5 minute window, you can execute step 2 as many times as you want to get messages in real time.
Note 1: I recommend refreshing the continuation token every 4 minutes so you don't miss any messages.
I'll leave it to you to dig through the JSON response, which contains the pieces of information you are looking for; see the sketch below.
Note: step 1 also returns the recent messages sent before your request, but in HTML format rather than JSON.
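To make those two steps concrete, here is a minimal Python sketch of the whole flow. The response shape (actions -> addChatItemAction -> liveChatTextMessageRenderer -> authorBadges -> liveChatAuthorBadgeRenderer) is my reading of the reverse-engineered payload, not a documented API, so verify it against a real response:

import re
import requests

VIDEO_ID = "VIDEO_ID"  # your YouTube live video id

# Step 1: scrape a continuation token (starts with 0ofMyAO) from the chat page.
html = requests.get(
    "https://www.youtube.com/live_chat?v=" + VIDEO_ID,
    headers={"User-Agent": "Firefox/99"},
).text
token = re.search(r'"continuation":"(0ofMyAO[^"]+)"', html).group(1)

# Step 2: exchange the token for the full JSON chat payload.
data = requests.post(
    "https://www.youtube.com/youtubei/v1/live_chat/get_live_chat"
    "?key=AIzaSyAO_FJ2SlqU8Q4STEHLGCilw_Y9_11qcW8",
    json={
        "context": {"client": {"clientName": "WEB", "clientVersion": "2.9999099"}},
        "continuation": token,
    },
).json()

# Walk the messages and pull out the membership badge tooltip and image.
chat = data["continuationContents"]["liveChatContinuation"]
for action in chat.get("actions", []):
    item = action.get("addChatItemAction", {}).get("item", {})
    msg = item.get("liveChatTextMessageRenderer")
    if not msg:
        continue
    for badge in msg.get("authorBadges", []):
        renderer = badge["liveChatAuthorBadgeRenderer"]
        tooltip = renderer.get("tooltip")  # e.g. "Member (6 months)"
        thumbs = renderer.get("customThumbnail", {}).get("thumbnails", [])
        print(tooltip, thumbs[-1]["url"] if thumbs else None)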
I'm developing a mobile app which records GPS data for an indoor ride, as if the athlete were circling a velodrome. It is relatively easy to calculate the GPS points from the speed measurements the spinning bike provides (compared to an arbitrary GPS route).
The app uploads my recorded activities in GPX format (gpx.gz, to be precise, to speed things up) using the Strava API. The app obtains an OAuth token with the "activity:write" scope. The upload POST returns 201, and the upload itself finishes shortly afterwards with a 200 success code. However, when I look at my Strava dashboard, no activity shows up. When I try to view the said activities through Strava's Swagger API playground, it tells me "Record Not Found".
curl -X GET "https://www.strava.com/api/v3/activities/4381960409" -H "accept: application/json" -H "authorization: Bearer zzzzzzzzzzzzzzzzzzzzzzzzzzzz"
{
  "message": "Record Not Found",
  "errors": [
    {
      "resource": "Activity",
      "field": "id",
      "code": "invalid"
    }
  ]
}
Example activity ids which "got lost in the ether": 4381670165, 4381744693, 4381960409.
My problem is that I don't have any debug information about what could be wrong. I receive success codes, but then the activities never really materialize. Furthermore, I cannot check the upload's status through their Swagger playground, because the OAuth token there only has read privileges.
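For reference, this is the call I would use to poll the upload status from the app side with the same write-scoped token (UPLOAD_ID being the id returned by the upload POST):

curl -X GET "https://www.strava.com/api/v3/uploads/UPLOAD_ID" -H "accept: application/json" -H "authorization: Bearer zzzzzzzzzzzzzzzzzzzzzzzzzzzz"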
Since I'm generating the GPX files, I tested them by uploading them manually: the first one as a Virtual Ride (https://www.strava.com/activities/4094942758) and the second one as a Ride tagged as indoor cycling (https://www.strava.com/activities/4094974788). Neither of them shows any GPS data whatsoever, even though the files contain the data.
So maybe the GPX files have some problem? Here are the two: https://drive.google.com/drive/folders/1dkUvrLxW2r3tvQqvoqAkOB9998N9uLn7?usp=sharing
The app is written in Flutter and uses my derivatives of strava_flutter and rw_tcx.
// Log in, load the activity's recorded track points, then upload them.
final stravaService = Get.find<StravaService>();
await stravaService.login();
final records = await _database.recordDao.findAllActivityRecords(activity.id);
final statusCode = await stravaService.upload(activity, records);
if (statusCode == statusOk) {
  // Only mark the activity as uploaded when Strava reported success.
  activity.uploaded = true;
  await _database.activityDao.updateActivity(activity);
}
As I mentioned, it completes successfully: both the upload POST and the subsequent status check come back with success codes. This happens in the guts of strava_flutter.
I made multiple mistakes:
I was dealing with multiple file formats (FIT, GPX, TCX), and the actual file I uploaded was a TCX. Kudos to the Strava developers: the system was able to gracefully swallow that and extract the info from the TCX without returning an error code. That's smooth.
The main reason the GPS didn't show: I swapped the lat-lon coordinates. Unfortunately that can be confusing, especially when someone is tired (see https://macwright.com/lonlat/). After some additional corrections the GPS now shows: https://www.strava.com/activities/4104607928
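To make the swap concrete, here is a tiny Python sketch with hypothetical coordinates. GPX names its attributes explicitly, so the bug is writing the longitude value into lat= and vice versa:

# Hypothetical point: latitude ~47.5, longitude ~19.0.
lat, lon = 47.502000, 19.046000

correct = '<trkpt lat="%.6f" lon="%.6f"></trkpt>' % (lat, lon)
swapped = '<trkpt lat="%.6f" lon="%.6f"></trkpt>' % (lon, lat)  # the bug

print(correct)  # <trkpt lat="47.502000" lon="19.046000"></trkpt>
print(swapped)  # lat 19.046 / lon 47.502: a point in the wrong place entirely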
I am using the Outlook v2.0 REST API to perform CRUD operations against calendars and events, and I have started hitting a rate limit issue.
This one, for example, is hitting the calendarview endpoint:
GET https://outlook.office.com/api/v2.0/me/calendars/{CALENDAR_ID}/calendarview
RESPONSE HEADERS
Rate-Limit-Limit=10000
Rate-Limit-Remaining=9982
Rate-Limit-Reset=2019-10-23T15:27:11.409Z
Retry-After=1
RateLimit-Exceeded=MailboxConcurrency
RateLimit-Scope=Mailbox
Transfer-Encoding=chunked
X-Proxy-BackendServerStatus=429
X-Powered-By=ASP.NET
X-RUM-Validated=1
RESPONSE BODY
{
  "error": {
    "code": "ApplicationThrottled",
    "message": "Application is over its MailboxConcurrency limit."
  }
}
At first I thought it was the 10,000 requests per 10 minute period, but it seems I am hitting a different limit.
The error shows that you've hit the MailboxConcurrency limit. There is a limit of 4 concurrent requests per mailbox, as per the documentation.
Out of interest, is there any reason you are using this API rather than Microsoft Graph?
https://learn.microsoft.com/en-us/graph/throttling#outlook-service-limits
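If it helps, here is a minimal Python sketch of one way to stay under that limit: gate requests per mailbox behind a semaphore of 4 and honor Retry-After on 429s (the URL and token handling are placeholders):

import threading
import time
import requests

# At most 4 requests in flight against the same mailbox at any moment.
mailbox_slots = threading.BoundedSemaphore(4)

def get_calendar_view(url, token):
    with mailbox_slots:
        while True:
            resp = requests.get(url, headers={"Authorization": "Bearer " + token})
            if resp.status_code != 429:
                return resp
            # Throttled: wait for the server-suggested interval, then retry.
            time.sleep(int(resp.headers.get("Retry-After", "1")))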
I'm consuming the PubNub Twitter stream and getting data on the console successfully. What I'm having trouble with, though, is the number of results. This is the code:
PUBNUB({
    subscribe_key: 'sub-c-78806dd4-42a6-11e4-aed8-02ee2ddab7fe'
}).subscribe({
    channel : 'pubnub-twitter',
    callback: processData
});

// Log only the tweets whose text mentions #brexit (case-insensitive).
function processData(data) {
    if (data.text.toLowerCase().indexOf("#brexit") > -1) {
        console.log(data.text);
    }
}
I'm getting results on my console for this too, but they're really slow (I had to wait about seven minutes to get two tweets, while on the Twitter app there are at least 3-5 tweets with this hashtag every minute).
Is there a faster/more efficient way to filter the stream?
Are you using your own Twitter connection or the stream from this page?
https://www.pubnub.com/developers/realtime-data-streams/twitter-stream/
This stream represents only a small fraction of the entire Twitter firehose, so it is very possible that you'd only get a hashtag match every few minutes.
If you want a specific hashtag you should create your own Twitter to PubNub stream. I wrote this blog post on how to do it with just a little bit of code:
https://www.pubnub.com/blog/2016-04-14-connect-twitter-and-pubnub-in-one-line-of-code-with-nodejs-streams/
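The blog post uses Node.js, but the same bridge is only a few lines in Python. This is just a sketch: it assumes tweepy 4.x, the current PubNub Python SDK, and Twitter's statuses/filter streaming endpoint as it existed at the time (since retired), so verify the class names against both libraries' docs:

import tweepy
from pubnub.pnconfiguration import PNConfiguration
from pubnub.pubnub import PubNub

# Publish every matching tweet to your own PubNub channel.
pnconfig = PNConfiguration()
pnconfig.publish_key = "YOUR_PUBLISH_KEY"
pnconfig.subscribe_key = "YOUR_SUBSCRIBE_KEY"
pnconfig.uuid = "twitter-bridge"
pubnub = PubNub(pnconfig)

class BridgeStream(tweepy.Stream):
    def on_status(self, status):
        # Forward only the fields the chat-style consumer needs.
        pubnub.publish().channel("my-twitter").message(
            {"text": status.text, "user": status.user.screen_name}
        ).sync()

stream = BridgeStream("CONSUMER_KEY", "CONSUMER_SECRET",
                      "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
# Server-side filtering: only tweets containing #brexit reach on_status.
stream.filter(track=["#brexit"])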
I'm using the YouTube Reporting API to get my CMS account's data as bulk reports.
I am using the API Explorer with my CMS user account. I have enabled the YouTube Reporting API in the console, but whenever I try the following request I get a 401 error. I believe I'm missing something or doing something wrong, but I couldn't find it. What is the exact reason for this issue?
Mr. Ibrahim Ulukaya, you are the one who created the PHP sample code for the YouTube Reporting API. How can I solve this issue?
Thank you! :)
This is my request:
POST https://youtubereporting.googleapis.com/v1/jobs?onBehalfOfContentOwner=contentOwner%3D%3DContent_Owner_Name&fields=id%2CreportTypeId&key={YOUR_API_KEY}
{
  "reportTypeId": "content_owner_ad_performance_a1"
}
This is the response:
401 OK
{
  "error": {
    "code": 401,
    "message": "The request does not have valid authentication credentials.",
    "status": "UNAUTHENTICATED"
  }
}
Edit
When I don't add the content owner name, I get a 400 error.
Here's my request:
POST https://youtubereporting.googleapis.com/v1/jobs?fields=name%2CreportTypeId&key={YOUR_API_KEY}
{
}
Here's the response:
400 OK
{
  "error": {
    "code": 400,
    "message": "Request contains an invalid argument.",
    "status": "INVALID_ARGUMENT"
  }
}
I solved this problem.
The 401 error happened because I was putting my content owner name in the onBehalfOfContentOwner parameter. According to the YouTube documentation, it's supposed to be my content owner ID ("The content owner's external ID on which behalf the user is acting on"). When I used my content owner ID instead, the request was fine.
The 400 error happened because I left the onBehalfOfContentOwner parameter out entirely. According to the YouTube documentation, "If the request does not specify a value for this parameter, the API server assumes that the request is being made for the user's own channel." If the request acts as your own channel (not as a content owner), you cannot retrieve anything from the content owner reports; you can only choose from the channel reports. If you try retrieving a content owner report while acting as your own channel, the request is invalid because that report is not in the channel report list.
The most important things are:
If you are trying to retrieve content owner reports, use your content owner ID, not your content owner name, in the onBehalfOfContentOwner parameter.
Choose a report ID from the list that matches the role you are acting as (channel vs. content owner), as in the corrected request below.
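For example, the corrected version of my first request looks like this (CONTENT_OWNER_ID stands for the external ID from the CMS account settings):

POST https://youtubereporting.googleapis.com/v1/jobs?onBehalfOfContentOwner=CONTENT_OWNER_ID&fields=id%2CreportTypeId&key={YOUR_API_KEY}

{
  "reportTypeId": "content_owner_ad_performance_a1"
}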
I've read the Twitter REST API docs; I know they say you can fetch 200 at a time, up to a max of 800. However... I can't. I'm pulling 200, using the last tweet's id as max_id, and then sending another request, but I only receive the last tweet from the first request, not the rest of my supposed 800 limit.
So I did a little research, and I found that when I sent more direct messages from other accounts, my older direct messages were disappearing (i.e., if I had 200 received messages from an account called "sup" and I sent 5 messages from an account called "foo", "sup" would only show 195 direct messages and "foo" would show 5). Those 5 messages would disappear from "sup" both in the Twitter DM window and in the API calls.
I'm using Twython to do this, but I don't believe switching back to requests would change anything, as I can visibly see the messages disappearing from the chat log. Does that mean Twitter only stores 200 total DMs? Or am I doing something completely wrong?
This is the code I was using to pull direct messages. Keep in mind that I still don't know how to explain the DMs disappearing in the Twitter DM console.
# Pull the most recent 200 DMs, then try to page further back with max_id.
test_m = twitter.get_direct_messages(count=200)
for i, x in enumerate(test_m):
    print 'dm number = ' + str(i) + ' | dm id = ' + str(x['id']) + ' | text = ' + x['text']

# Note: max_id is inclusive, so this second request returns the last DM
# of the first batch again.
m_id = test_m[-1]['id']
test_m_2 = twitter.get_direct_messages(count=200, max_id=m_id)
This code returns test_m as an array of 200 items, and test_m_2 as an array of 1 item containing the last element of test_m.
Edit: no response yet, but I should add that this method successfully returns more than 200 items for the other API calls I've made (user timeline, mentions timeline, retweets). From my testing I have to assume that only the 200 most recent incoming messages are stored by Twitter across all DM interactions. If I'm wrong, let me know!
Brian,
Twitter stores more than the last 200 messages. If you delete one of the direct messages using destroy_direct_message, then you can access one additional old direct message.
Deleting 100 old direct messages will give you access to an additional 100 messages, and so on.
I could make neither max_id nor page work either. Not sure if the bug is in Twython or Twitter ;-(
JJ
Currently, the API docs state you can get up to the latest 3,200 tweets of an account, but only the 200 latest received direct messages (direct_messages endpoint) from a conversation, or the 800 latest sent direct messages (direct_messages/sent endpoint).
To answer your question, I do not think there is a limit on the number of direct messages stored by Twitter. Recently, I was able to retrieve a complete conversation with more than 17,000 direct messages (and all the uploaded media) using this tool that I have created for this purpose.
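For reference, the newer direct_messages/events/list endpoint pages with a cursor instead of max_id, which is how a whole conversation can be walked. A rough Python sketch (this v1.1 endpoint has since been retired by Twitter, and the credentials are placeholders):

import requests
from requests_oauthlib import OAuth1

auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET",
              "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
url = "https://api.twitter.com/1.1/direct_messages/events/list.json"

events, cursor = [], None
while True:
    params = {"count": 50}  # 50 is the per-page maximum for this endpoint
    if cursor:
        params["cursor"] = cursor
    page = requests.get(url, auth=auth, params=params).json()
    events.extend(page.get("events", []))
    cursor = page.get("next_cursor")
    if not cursor:
        break

print(len(events), "direct message events fetched")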