I have a question about the YouTube API. I'm using the CodeIgniter YouTube API Library by jimdoescode: https://github.com/jimdoescode/CodeIgniter-YouTube-API-Library.
Imagine that you have two channels, channel x and channel y.
I need to run PHP code that shows me the most viewed videos per week from these two channels ONLY, in ASC or DESC order.
** The channels are not yours; they can belong to any user.
Ex :
channel x has:
video1 - 3 watchers
video2 - 1 watchers
video3 - 6 watchers
channel y has:
video4 - 9 watchers
video5 - 3 watchers
video6 - 2 watchers
The PHP code should output the following:
video4
video3
video1
video5
video6
video2
I've searched the YouTube API Developer's Guide; can you help me with some hints, please?
I don't believe you can pull feeds from two channels at once from the YouTube API.
You would have to pull the two feeds, merge the data in PHP, and then sort.
Since you say you are using the CI YouTube API Library, you would need to add the
orderby parameter with a value of viewCount to the function you use to pull each feed.
For example, if you are using getUserUploads() you would want something like:
$resultX = $this->youtube->getUserUploads('channelX',array('orderby'=>'viewCount'));
$resultY = $this->youtube->getUserUploads('channelY',array('orderby'=>'viewCount'));
How you parse the XML responses and convert them to an array for sorting, I will leave up to you.
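Once both feeds are parsed into arrays, the merge-and-sort step itself is simple. A minimal sketch (shown in Python with made-up sample data standing in for the parsed feed entries; the CodeIgniter library itself is PHP, so this only illustrates the logic):

```python
# Stand-ins for the parsed entries of each channel's feed; in the real flow
# these would come from parsing the XML returned for each channel.
channel_x = [
    {"title": "video1", "views": 3},
    {"title": "video2", "views": 1},
    {"title": "video3", "views": 6},
]
channel_y = [
    {"title": "video4", "views": 9},
    {"title": "video5", "views": 3},
    {"title": "video6", "views": 2},
]

# Merge both channels into one list, then sort by view count, descending.
merged = sorted(channel_x + channel_y, key=lambda v: v["views"], reverse=True)
print([v["title"] for v in merged])
# → ['video4', 'video3', 'video1', 'video5', 'video6', 'video2']
```

This reproduces the ordering from the question; since Python's sort is stable, video1 (channel x) comes before video5 (channel y) when both have 3 views.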
http://gdata.youtube.com/feeds/api/users/USERNAME/uploads?orderby=viewCount&max-results=5
Replace USERNAME with the username of the user you want to check.
Replace 5 with the number of videos you want sorted (shown).
I would like to find public users on Twitter that have 0 followers. I was thinking of using https://developer.twitter.com/en/docs/accounts-and-users/follow-search-get-users/api-reference/get-users-search, but this doesn't have a way to filter by number of followers. Are there any simple alternatives? (Otherwise, I might have to resort to using a graph/search based approach starting from a random point)
Well, you didn't specify which library you are using to interact with the Twitter API, but regardless of the technology, the underlying concept is the same. I will use the tweepy library in Python for my example.
Start by getting the public users using this. The return type is a list of user objects. The user object has several attributes, which you can learn about here. For now we are interested in the followers_count attribute. Simply loop through the returned objects and keep those where the value of this attribute is 0.
Here's how the implementation would look in Python using the tweepy library:
search_query = 'your search query here'
get_users = api.search_users(q=search_query)  # returns a list of user objects
for user in get_users:
    if user.followers_count == 0:
        pass  # do stuff if the user has 0 followers
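The filtering step is plain Python and easy to check in isolation. A minimal sketch, using SimpleNamespace objects as stand-ins for tweepy's user objects (the real objects expose the same followers_count attribute):

```python
from types import SimpleNamespace

# Stand-ins for tweepy user objects; only followers_count matters here.
users = [
    SimpleNamespace(screen_name="alice", followers_count=0),
    SimpleNamespace(screen_name="bob", followers_count=42),
    SimpleNamespace(screen_name="carol", followers_count=0),
]

def zero_follower_users(users):
    """Return only the users with exactly 0 followers."""
    return [u for u in users if u.followers_count == 0]

print([u.screen_name for u in zero_follower_users(users)])
# → ['alice', 'carol']
```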
Bird SQL by Perplexity AI allows you to do this simply: https://www.perplexity.ai/sql
Query: Users with 0 followers, and 0 following, with at least 5 tweets
SELECT user_url, full_name, followers_count, following_count, tweet_count
FROM users
WHERE (followers_count = 0)
AND (following_count = 0)
AND (tweet_count >= 5)
ORDER BY tweet_count DESC
LIMIT 10
I have a problem with the Enhanced E-commerce Measurement Protocol (docs). Sometimes a client buys about 100 different products in one transaction. That exceeds the payload limit of 8192 bytes (reference) and the request doesn't go through.
I tried to split it into small packs:
transaction details + one item with index 1 (every item sent as pr1id)
I also tried to split it with an incrementing index:
transaction details + one item with an incrementing index (e.g. first I send the transaction + pr1id, then the transaction + pr2id, etc.)
I always end up with only one item in Google Analytics. Is there any way to split the payload so that it works correctly? I couldn't find a solution on Google or in the docs.
I'm trying to pull data from Twitter over a month or so for a project. There are fewer than 10,000 tweets with this hashtag over that period, but I only seem to be getting tweets from the current day. I got 68 yesterday and 80 today; both batches were timestamped with the current day.
api = tweepy.API(auth)
igsjc_tweets = api.search(q="#igsjc", since='2014-12-31', count=100000)
ipdb> len(igsjc_tweets)
80
I know for certain there should be more than 80 tweets. I've heard that Twitter rate-limits to 1,500 tweets at a time, but does it also limit results to a certain day? Note that I've also tried the Cursor approach with
igsjc_tweets = tweepy.Cursor(api.search, q="#igsjc", since='2015-12-31', count=10000)
This also only gets me 80 tweets. Any tips or suggestions on how to get the full data would be appreciated.
Here's the official tweepy tutorial on Cursor. Note: you need to iterate through the Cursor, as shown below. Also, there is a maximum count that you can pass to .items(), so it's probably a good idea to pull month-by-month or something similar, and to sleep between calls. HTH!
igsjc_tweets_jan = [tweet for tweet in tweepy.Cursor(
api.search, q="#igsjc", since='2016-01-01', until='2016-01-31').items(1000)]
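The month-by-month idea can be sketched as a small helper that yields (since, until) date pairs; the tweepy call is wrapped in a function so it only runs once you have an authenticated api object (the query and date range below are taken from the question, and pull_all is a hypothetical name):

```python
import time
from datetime import date, timedelta

def month_ranges(start, end):
    """Yield (since, until) ISO date pairs covering [start, end) month by month."""
    cur = start
    while cur < end:
        # jump to the first day of the next month
        nxt = (cur.replace(day=1) + timedelta(days=32)).replace(day=1)
        yield cur.isoformat(), min(nxt, end).isoformat()
        cur = nxt

def pull_all(api, query="#igsjc"):
    """Collect tweets one month at a time, sleeping between batches."""
    import tweepy  # assumes an authenticated tweepy.API instance is passed in
    tweets = []
    for since, until in month_ranges(date(2016, 1, 1), date(2016, 4, 1)):
        tweets += [t for t in tweepy.Cursor(
            api.search, q=query, since=since, until=until).items(1000)]
        time.sleep(5)  # be gentle between batches
    return tweets

print(list(month_ranges(date(2016, 1, 1), date(2016, 3, 15))))
# → [('2016-01-01', '2016-02-01'), ('2016-02-01', '2016-03-01'), ('2016-03-01', '2016-03-15')]
```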
First, tweepy cannot retrieve very old data through its search API.
I don't know the exact limitation, but the standard search index only reaches back about a week or two.
Anyway, you can use this piece of code to get tweets.
I ran it to collect tweets from the last few days and it works for me.
Notice that you can refine it and add geocode information; I left an example commented out for you.
flag = True
last_id = None
while flag:
    flag = False
    for status in tweepy.Cursor(api.search,
                                # q='geocode:"37.781157,-122.398720,1mi" since:'+since+' until:'+until+' include:retweets',
                                q="#igsjc",
                                since='2015-12-31',
                                max_id=last_id,
                                result_type='recent',
                                include_entities=True).items(300):
        tweet = status._json
        print(tweet)
        flag = True  # there is still more data to collect
        last_id = status.id - 1  # max_id is inclusive, so start below this tweet next time
Good luck
I am trying to use the PlaylistItems: list method in my Java code.
The code itself is irrelevant here, as the problem can be replicated with the YouTube API examples here:
https://developers.google.com/youtube/v3/docs/playlistItems/list
The problem:
While a video with a specific id exists in the playlist, a list query on that playlist filtered by videoId, with maxResults smaller than the video's sequence number in the playlist, returns an empty list instead of a list containing that specific video.
For example:
https://developers.google.com/youtube/v3/docs/playlistItems/list
part = snippet,id
playlistId = PL6894BC5B5D452193
videoId = xEsC1tw-pOw
maxResults = 5 (default value)
(the 11th video in the list)
The result is an empty list.
But if I search for
part = snippet,id
playlistId = PL6894BC5B5D452193
videoId = m1V1SjMD1lo
the video is found
(the 1st video in the list)
As far as I can tell, the reason is the value of the maxResults parameter.
However, in a real-world scenario my list contains more than 100 items, and the maximum allowed value for maxResults is 50.
So is there a way to find the correct video using the list method?
Is this a bug, or am I missing something?
Okay, let's say I have a YouTube playlist with 500 items in it. YouTube's PlaylistItems endpoint only allows you to retrieve 50 items at a time:
https://developers.google.com/youtube/v3/docs/playlistItems/list
After 50 items, it gives you a nextPageToken which you can use to specify in your query to get the next page. Doing this, you could iterate through the entire playlist to get all 500 items in 10 queries.
However, what if I only wanted to get the last page? Page 10?
In YouTube's v2 API, you could tell it to start the index at position 451, and it would give you the results for 451-500. This doesn't seem to be an option in the v3 API. Now it seems that if I wanted to get just page 10, I would have to iterate through the entire playlist again, throw out the first 9 pages, and take only the 10th.
This seems like a huge waste of resources, and the cURL operations alone could be a killer.
So is it possible to set the starting index in the V3 API like in the V2 API?
You can still use a start index, but you have to generate the corresponding page token yourself.
As far as one can tell from observation, page tokens are basically a byte sequence encoded in base64, with the first byte always being 8 and the last two being 16, 0. We can generate tokens somewhat like this (using Python 3):
import base64

i = 451
k = i // 128
i -= 128 * (k - 1)
b = [8, i]  # first byte is always 8, then the (adjusted) index
if k > 1 or i > 127: b += [k]
b += [16, 0]  # the last two bytes are always 16, 0
t = base64.b64encode(bytes(b)).decode('utf8').strip('=')
The final .strip('=') removes the trailing '=' characters that base64 uses to pad incomplete blocks. The result ('CMMDEAA') is your page token.
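Wrapped up as a reusable function, the scheme amounts to a tiny protobuf-style message: field 1 holds the start index as a varint, followed by the fixed 16, 0 tail. Note this token format is reverse-engineered from observation, not documented API behavior, so treat it as an assumption:

```python
import base64

def make_page_token(index):
    """Build a playlistItems page token for a given start index.

    The token is base64 of: 0x08, varint(index), 0x10, 0x00 --
    i.e. a tiny protobuf message with field 1 = index, field 2 = 0.
    """
    payload = [8]  # field 1 tag, varint wire type
    # encode the index as a protobuf varint (7 bits per byte, LSB first)
    while index > 127:
        payload.append((index & 0x7F) | 0x80)
        index >>= 7
    payload.append(index)
    payload += [16, 0]  # field 2 = 0
    return base64.b64encode(bytes(payload)).decode('ascii').rstrip('=')

print(make_page_token(451))  # → CMMDEAA
print(make_page_token(0))    # → CAAQAA
```

make_page_token(451) reproduces the 'CMMDEAA' token derived above, so a start index of 451 can be requested directly without paging through the first nine pages.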