Developer API for Walmart Grocery - walmart-api

There are developer APIs and SDKs for Walmart, but these appear to be designed for general items like TVs, furniture, etc. Does anybody happen to know if there is an SDK or API for Walmart Grocery? My use case is to populate a Walmart Grocery shopping cart programmatically for a given store.

For the Walmart Grocery API, you can use cURL to get a list of grocery items. In the example below I use PHP with the Guzzle HTTP client.
Search for eggs (PHP):
use GuzzleHttp\Client;

$client = new Client(['base_uri' => 'https://grocery.walmart.com/v4/api/']);
$method = 'GET';
$path   = 'products/search';
$query  = [
    'query' => [
        'storeId' => '1855', // store location
        'count'   => 50,
        'page'    => 1,
        'offset'  => 0,
        'query'   => 'Egg Dozen'
    ]
];

$req      = $client->request($method, $path, $query);
$data     = json_decode($req->getBody()->getContents());
$products = $data->products;
print_r($products);
This will list the grocery items matching your search query, up to the requested count of 50 per page.
Search for eggs (cURL):
curl -X GET \
'https://grocery.walmart.com/v4/api/products/search?storeId=1855&count=1&page=1&offset=0&query=eggs' \
-H 'Postman-Token: 1723fea5-657d-4c54-b245-027d41b135aa' \
-H 'cache-control: no-cache'
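If you prefer Python, here is a rough equivalent of that cURL call using the requests library. Keep in mind this is an unofficial, undocumented endpoint, so the parameters and response shape are assumptions based on the examples above and may change without notice.

import requests

# Unofficial Walmart Grocery search endpoint, copied from the cURL example above.
# It is not a documented API, so it may change or start requiring extra headers/cookies.
url = 'https://grocery.walmart.com/v4/api/products/search'
params = {
    'storeId': '1855',  # store location
    'count': 50,
    'page': 1,
    'offset': 0,
    'query': 'eggs',
}

resp = requests.get(url, params=params, timeout=30)
resp.raise_for_status()

data = resp.json()
# 'products' matches the field read in the PHP example; other fields are not documented
for product in data.get('products', []):
    print(product)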
Not sure if that answered your question but I hope it is a start.

Related

How can I retrieve the YouTube comments that are held for review

I am trying to retrieve all of the comments that are held for review. I am able to get the top-level comments by using the commentThreads().list() function, but I need to get the replies to the top-level comments that are held for review. I used the comments().list() method to get the replies to the top-level comments. I am able to get the replies, but all of them were published comments. None of the comments that are held for review were retrieved, which is puzzling. Is this how it's supposed to be? I do not need the published comments, I just need the ones that are held for review. I tried to request the comments that are held for review, but kept getting the following error:
mod = item['snippet']['moderationStatus']
KeyError: 'moderationStatus'
Not sure what to do to get replies that are held for review.
def get_comments(service, parent_id, threadId, comments):
    results = service.comments().list(
        part="snippet,id",
        parentId=parent_id,
        textFormat="plainText"
    ).execute()

    for item in results['items']:
        cid = item['id']
        text = item['snippet']['textDisplay']
        mod = item['snippet']['moderationStatus']
        # ^ The line above generates an error: KeyError: 'moderationStatus'
        # If I delete the "mod =..." and the "if mod !=..." lines, I get all
        # of the replies to the top-level comment (parentId) that have been
        # published but none of the ones that are held for review.
        if mod != "heldForReview":
            comments.append([text, cid])

    return comments
There is a query parameter moderationStatus you can use to get comments held for review. I think you just need to set moderationStatus=heldForReview.
See the doc here
https://developers.google.com/youtube/v3/docs/commentThreads/
And your query would look like this
curl \
'https://www.googleapis.com/youtube/v3/commentThreads?part=snippet%2Creplies&moderationStatus=heldForReview&videoId=[VIDEO_ID]&key=[YOUR_API_KEY]' \
--header 'Authorization: Bearer [YOUR_ACCESS_TOKEN]' \
--header 'Accept: application/json' \
--compressed
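For completeness, here is a minimal Python sketch of the same call with the google-api-python-client library. It assumes `service` is a Resource object authorized with the channel owner's OAuth credentials (an API key alone will not return held-for-review comments).

# Minimal sketch: list comment threads on a video that are held for review.
# `service` is assumed to be an authorized googleapiclient Resource, e.g. built with
# googleapiclient.discovery.build("youtube", "v3", credentials=creds).
def get_held_for_review(service, video_id):
    held = []
    request = service.commentThreads().list(
        part="snippet,replies",
        videoId=video_id,
        moderationStatus="heldForReview",
        textFormat="plainText",
        maxResults=100,
    )
    while request is not None:
        response = request.execute()
        for item in response.get("items", []):
            top = item["snippet"]["topLevelComment"]["snippet"]
            held.append((item["id"], top["textDisplay"]))
        # list_next handles pagination via nextPageToken
        request = service.commentThreads().list_next(request, response)
    return held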

YouTube API to fetch all videos on a channel

We need a list of videos for a YouTube channel, by channel name (using the API).
We can get a channel list (only channel name) by using the below API:
https://gdata.youtube.com/feeds/api/channels?v=2&q=tendulkar
Below are direct links to the channels:
https://www.youtube.com/channel/UCqAEtEr0A0Eo2IVcuWBfB9g
Or
WWW.YouTube.com/channel/HC-8jgBP-4rlI
Now, we need the videos of the channel UCqAEtEr0A0Eo2IVcuWBfB9g or HC-8jgBP-4rlI.
We tried
https://gdata.youtube.com/feeds/api/videos?v=2&uploader=partner&User=UC7Xayrf2k0NZiz3S04WuDNQ
https://gdata.youtube.com/feeds/api/videos?v=2&uploader=partner&q=UC7Xayrf2k0NZiz3S04WuDNQ
But it does not help.
We need all the videos posted on the channel. Videos uploaded to a channel can be from multiple users, so I don't think providing a user parameter would help...
You need to look at the YouTube Data API. There you will find documentation about how the API can be accessed. You can also find client libraries.
You could also make the requests yourself. Here is an example URL that retrieves the latest videos from a channel:
https://www.googleapis.com/youtube/v3/search?key={your_key_here}&channelId={channel_id_here}&part=snippet,id&order=date&maxResults=20
After that you will receive a JSON with video ids and details, and you can construct your video URL like this:
http://www.youtube.com/watch?v={video_id_here}
First, you need to get the ID of the playlist that represents the uploads from the user/channel:
https://developers.google.com/youtube/v3/docs/channels/list#try-it
You can specify the username with the forUsername={username} param, or specify mine=true to get your own (you need to authenticate first). Include part=contentDetails to see the playlists.
GET https://www.googleapis.com/youtube/v3/channels?part=contentDetails&forUsername=jambrose42&key={YOUR_API_KEY}
In the result "relatedPlaylists" will include "likes" and "uploads" playlists. Grab that "upload" playlist ID.
Also note the upload playlist id is your channelId prefixed with UU instead of UC.
Next, get a list of videos in that playlist:
https://developers.google.com/youtube/v3/docs/playlistItems/list#try-it
Just drop in the playlistId!
GET https://www.googleapis.com/youtube/v3/playlistItems?part=snippet%2CcontentDetails&maxResults=50&playlistId=UUpRmvjdu3ixew5ahydZ67uA&key={YOUR_API_KEY}
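If it helps, here is a minimal Python sketch of those two requests with the requests library; the API key is a placeholder and the username is the one from the example above.

import requests

API_KEY = 'YOUR_API_KEY'  # placeholder

# Step 1: look up the channel and grab its "uploads" playlist ID
channel = requests.get(
    'https://www.googleapis.com/youtube/v3/channels',
    params={'part': 'contentDetails', 'forUsername': 'jambrose42', 'key': API_KEY},
).json()
uploads_id = channel['items'][0]['contentDetails']['relatedPlaylists']['uploads']

# Step 2: page through the uploads playlist, 50 items at a time
params = {'part': 'snippet,contentDetails', 'maxResults': 50,
          'playlistId': uploads_id, 'key': API_KEY}
while True:
    page = requests.get(
        'https://www.googleapis.com/youtube/v3/playlistItems', params=params
    ).json()
    for item in page.get('items', []):
        print(item['contentDetails']['videoId'], item['snippet']['title'])
    if 'nextPageToken' not in page:
        break
    params['pageToken'] = page['nextPageToken']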
Here is a video from Google Developers showing how to list all videos in a channel in v3 of the YouTube API.
There are two steps:
Query Channels to get the "uploads" ID, e.g. https://www.googleapis.com/youtube/v3/channels?id={channel Id}&key={API key}&part=contentDetails
Use this "uploads" ID to query PlaylistItems to get the list of videos, e.g. https://www.googleapis.com/youtube/v3/playlistItems?playlistId={"uploads" Id}&key={API key}&part=snippet&maxResults=50
To get the channels list:
Get channels list by forUsername:
https://www.googleapis.com/youtube/v3/channels?part=snippet,contentDetails,statistics&forUsername=Apple&key=
Get channels list by channel id:
https://www.googleapis.com/youtube/v3/channels/?part=snippet,contentDetails,statistics&id=UCE_M8A5yxnLfW0KghEeajjw&key=
Get Channel sections:
https://www.googleapis.com/youtube/v3/channelSections?part=snippet,contentDetails&channelId=UCE_M8A5yxnLfW0KghEeajjw&key=
To get playlists:
Get Playlists by Channel ID:
https://www.googleapis.com/youtube/v3/playlists?part=snippet,contentDetails&channelId=UCq-Fj5jknLsUf-MWSy4_brA&maxResults=50&key=
Get Playlists by Channel ID with pageToken:
https://www.googleapis.com/youtube/v3/playlists?part=snippet,contentDetails&channelId=UCq-Fj5jknLsUf-MWSy4_brA&maxResults=50&key=&pageToken=CDIQAA
To get playlist items:
Get PlaylistItems list by PlayListId:
https://www.googleapis.com/youtube/v3/playlistItems?part=snippet,contentDetails&maxResults=25&playlistId=PLHFlHpPjgk70Yv3kxQvkDEO5n5tMQia5I&key=
To get videos:
Get videos list by video id:
https://www.googleapis.com/youtube/v3/videos?part=snippet,contentDetails,statistics&id=YxLCwfA1cLw&key=
Get videos list by multiple videos id:
https://www.googleapis.com/youtube/v3/videos?part=snippet,contentDetails,statistics&id=YxLCwfA1cLw,Qgy6LaO3SB0,7yPJXGO2Dcw&key=
To get comments list:
Get Comment list by video ID:
https://www.googleapis.com/youtube/v3/commentThreads?part=snippet,replies&videoId=el****kQak&key=A**********k
Get Comment list by channel ID:
https://www.googleapis.com/youtube/v3/commentThreads?part=snippet,replies&channelId=U*****Q&key=AI********k
Get Comment list by allThreadsRelatedToChannelId:
https://www.googleapis.com/youtube/v3/commentThreads?part=snippet,replies&allThreadsRelatedToChannelId=UC*****ntcQ&key=AI*****k
All of the above are GET requests.
Note that we can't get all of a channel's videos directly from the channel ID; that's the important point here.
For integration, see https://developers.google.com/youtube/v3/quickstart/ios?ver=swift
Here is the code that will return all video ids under your channel
<?php
$baseUrl = 'https://www.googleapis.com/youtube/v3/';
// https://developers.google.com/youtube/v3/getting-started
$apiKey = 'API_KEY';
// If you don't know the channel ID see below
$channelId = 'CHANNEL_ID';

$params = [
    'id'   => $channelId,
    'part' => 'contentDetails',
    'key'  => $apiKey
];
$url = $baseUrl . 'channels?' . http_build_query($params);
$json = json_decode(file_get_contents($url), true);
$playlist = $json['items'][0]['contentDetails']['relatedPlaylists']['uploads'];

$params = [
    'part'       => 'snippet',
    'playlistId' => $playlist,
    'maxResults' => '50',
    'key'        => $apiKey
];
$url = $baseUrl . 'playlistItems?' . http_build_query($params);
$json = json_decode(file_get_contents($url), true);

$videos = [];
foreach ($json['items'] as $video) {
    $videos[] = $video['snippet']['resourceId']['videoId'];
}

while (isset($json['nextPageToken'])) {
    $nextUrl = $url . '&pageToken=' . $json['nextPageToken'];
    $json = json_decode(file_get_contents($nextUrl), true);
    foreach ($json['items'] as $video) {
        $videos[] = $video['snippet']['resourceId']['videoId'];
    }
}
print_r($videos);
Note: You can get your channel ID at
https://www.youtube.com/account_advanced after logging in.
Below is a Python alternative that does not require any special packages. Given the channel ID, it returns a list of video links for that channel. Please note that you need an API key for it to work.
import urllib.request
import json

def get_all_video_in_channel(channel_id):
    api_key = 'YOUR_API_KEY'
    base_video_url = 'https://www.youtube.com/watch?v='
    base_search_url = 'https://www.googleapis.com/youtube/v3/search?'

    first_url = base_search_url + 'key={}&channelId={}&part=snippet,id&order=date&maxResults=25'.format(api_key, channel_id)

    video_links = []
    url = first_url
    while True:
        resp = json.load(urllib.request.urlopen(url))

        for i in resp['items']:
            if i['id']['kind'] == "youtube#video":
                video_links.append(base_video_url + i['id']['videoId'])

        try:
            next_page_token = resp['nextPageToken']
            url = first_url + '&pageToken={}'.format(next_page_token)
        except KeyError:
            break

    return video_links
Thanks to the references shared here and elsewhere, I've made an online script / tool that one can use to obtain all videos of a channel.
It combines API calls to youtube.channels.list, playlistItems, videos. It uses recursive functions to make the asynchronous callbacks run the next iteration upon getting a valid response.
This also serves to limit the actual number of requests made at a time, hence keeping you safe from violating YouTube API rules. I'm sharing shortened snippets and then a link to the full code. I got around the 50-max-results-per-call limitation by using the nextPageToken value that comes in the response to fetch the next 50 results, and so on.
function getVideos(nextPageToken, vidsDone, params) {
    $.getJSON("https://www.googleapis.com/youtube/v3/playlistItems", {
        key: params.accessKey,
        part: "snippet",
        maxResults: 50,
        playlistId: params.playlistId,
        fields: "items(snippet(publishedAt, resourceId/videoId, title)), nextPageToken",
        pageToken: (nextPageToken || '')
    },
    function (data) {
        // commands to process JSON variable, extract the 50 videos info
        if (vidsDone < params.vidslimit) {
            // Recursive: the function is calling itself if
            // all videos haven't been loaded yet
            getVideos(data.nextPageToken, vidsDone, params);
        }
        else {
            // Closing actions to do once we have listed the videos needed.
        }
    });
}
This got a basic listing of the videos, including id, title, date of publishing and similar. But to get more detail of each video like view counts and likes, one has to make API calls to videos.
// Looping through an array of video id's
function fetchViddetails(i) {
    $.getJSON("https://www.googleapis.com/youtube/v3/videos", {
        key: document.getElementById("accesskey").value,
        part: "snippet,statistics",
        id: vidsList[i]
    }, function (data) {
        // Commands to process JSON variable, extract the video
        // information and push it to a global array
        if (i < vidsList.length - 1) {
            fetchViddetails(i + 1); // Recursive: calls itself if the
                                    // list isn't over.
        }
    });
}
See the full code here, and live version here. (Edit: fixed github link)
Edit: Dependencies: JQuery, Papa.parse
Short answer:
Here's a library called scrapetube that can help with that.
pip install scrapetube
import scrapetube
import simplejson as json
videos = scrapetube.get_channel("UC9-y-6csu5WGm29I7JiwpnA")
for video in videos:
    print(video['videoId'])
    print(video['title']['runs'][0]['text'])
    print(video['publishedTimeText']['simpleText'])
    print('\r\n')
    # DEBUG: print(json.dumps(video))
Long answer:
The module mentioned above was created by me due to a lack of any other solutions. Here's what I tried:
Selenium. It worked but had three big drawbacks: 1. it requires a web browser and driver to be installed; 2. it has big CPU and memory requirements; 3. it can't handle big channels.
Using youtube-dl. Like this:
import youtube_dl

youtube_dl_options = {
    'skip_download': True,
    'ignoreerrors': True
}
with youtube_dl.YoutubeDL(youtube_dl_options) as ydl:
    videos = ydl.extract_info(f'https://www.youtube.com/channel/{channel_id}/videos')
This also works for small channels, but for bigger ones I would get blocked by YouTube for making so many requests in such a short time (because youtube-dl downloads more info for every video in the channel).
So I made the library scrapetube, which uses the web API to get all the videos.
Try something like the following; it may help you.
https://gdata.youtube.com/feeds/api/videos?author=cnn&v=2&orderby=updated&alt=jsonc&q=news
Here, with author you can specify your channel name, and with q you can give your search keyword.
Since everyone answering this question has problems due to the 500-video limit, here's an alternate solution using youtube_dl in Python 3. Also, no API key is needed.
Install youtube_dl: sudo pip3 install youtube-dl
Find out your target channel's channel ID. The ID is going to start with UC. Replace the C (for Channel) with U (for Upload), i.e. UU...; this is the uploads playlist.
Use the playlist downloader feature from youtube-dl. Ideally you do NOT want to download every video in the playlist (which is the default), but only the metadata.
Example (warning -- takes tens of minutes):
import youtube_dl, pickle

# UCVTyTA7-g9nopHeHbeuvpRA is the channel id (1517+ videos)
PLAYLIST_ID = 'UUVTyTA7-g9nopHeHbeuvpRA'  # Late Night with Seth Meyers

with youtube_dl.YoutubeDL({'ignoreerrors': True}) as ydl:
    playd = ydl.extract_info(PLAYLIST_ID, download=False)

with open('playlist.pickle', 'wb') as f:
    pickle.dump(playd, f, pickle.HIGHEST_PROTOCOL)

vids = [vid for vid in playd['entries'] if 'A Closer Look' in vid['title']]
print(sum('Trump' in vid['title'] for vid in vids), '/', len(vids))
Just in three steps:
Subscriptions: list ->
https://www.googleapis.com/youtube/v3/subscriptions?part=snippet&maxResults=50&mine=true&access_token={oauth_token}
Channels: list ->
https://www.googleapis.com/youtube/v3/channels?part=contentDetails&id={channel_id}&key={YOUR_API_KEY}
PlaylistItems: list ->
https://www.googleapis.com/youtube/v3/playlistItems?part=snippet&playlistId={playlist_id}&key={YOUR_API_KEY}
Recently I had to retrieve all videos from a channel, and according to YouTube developer documentation:
https://developers.google.com/youtube/v3/docs/playlistItems/list
function playlistItemsListByPlaylistId($service, $part, $params) {
    $params = array_filter($params);
    $response = $service->playlistItems->listPlaylistItems(
        $part,
        $params
    );
    print_r($response);
}

playlistItemsListByPlaylistId($service,
    'snippet,contentDetails',
    array('maxResults' => 25, 'playlistId' => 'id of "uploads" playlist'));
Where $service is your Google_Service_YouTube object.
So you have to fetch information from the channel to retrieve the "uploads" playlist that actually has all the videos uploaded by the channel: https://developers.google.com/youtube/v3/docs/channels/list
If you are new to this API, I highly recommend switching the code sample from the default snippet to the full sample.
So the basic code to retrieve all videos from a channel can be:
class YouTube
{
    const DEV_KEY = 'YOUR_DEVELOPPER_KEY';

    private $client;
    private $youtube;
    private $lastChannel;

    public function __construct()
    {
        $this->client = new Google_Client();
        $this->client->setDeveloperKey(self::DEV_KEY);
        $this->youtube = new Google_Service_YouTube($this->client);
        $this->lastChannel = false;
    }

    public function getChannelInfoFromName($channel_name)
    {
        if ($this->lastChannel && $this->lastChannel['modelData']['items'][0]['snippet']['title'] == $channel_name) {
            return $this->lastChannel;
        }
        $this->lastChannel = $this->youtube->channels->listChannels('snippet, contentDetails, statistics', array(
            'forUsername' => $channel_name,
        ));
        return ($this->lastChannel);
    }

    public function getVideosFromChannelName($channel_name, $max_result = 5)
    {
        $this->getChannelInfoFromName($channel_name);
        $params = [
            'playlistId' => $this->lastChannel['modelData']['items'][0]['contentDetails']['relatedPlaylists']['uploads'],
            'maxResults' => $max_result,
        ];
        return ($this->youtube->playlistItems->listPlaylistItems('snippet,contentDetails', $params));
    }
}

$yt = new YouTube();
echo '<pre>' . print_r($yt->getVideosFromChannelName('CHANNEL_NAME'), true) . '</pre>';
Using API version 2, which is deprecated, the URL for uploads (of channel UCqAEtEr0A0Eo2IVcuWBfB9g) is:
https://gdata.youtube.com/feeds/users/UCqAEtEr0A0Eo2IVcuWBfB9g/uploads
There is an API version 3.
From https://stackoverflow.com/a/65440501/2585501:
This method is especially useful if a) the channel has more than 50 videos, or b) you want the YouTube video IDs formatted as a flat txt list:
Obtain a Youtube API v3 key (see https://stackoverflow.com/a/65440324/2585501)
Obtain the Youtube Channel ID of the channel (see https://stackoverflow.com/a/16326307/2585501)
Obtain the Uploads Playlist ID of the channel: https://www.googleapis.com/youtube/v3/channels?id={channel Id}&key={API key}&part=contentDetails (based on https://www.youtube.com/watch?v=RjUlmco7v2M)
Install youtube-dl (e.g. pip3 install --upgrade youtube-dl or sudo apt-get install youtube-dl)
Download the Uploads Playlist using youtube-dl: youtube-dl -j --flat-playlist "https://<yourYoutubePlaylist>" | jq -r '.id' | sed 's_^_https://youtu.be/_' > videoList.txt (see https://superuser.com/questions/1341684/youtube-dl-how-download-only-the-playlist-not-the-files-therein)
Posting long after the original question was asked, but I made a python package that does this using a very simple API. It gets all the videos uploaded to a channel, but I'm not sure about this part (included in the original question):
Videos uploaded to a channel can be from multiple users thus I don't think providing a user parameter would help...
Maybe YouTube changed in the 8 years since this question was posted, but if it didn't, the package I made might not cover this case.
To use the API:
pip3 install -U yt-videos-list # macOS
pip install -U yt-videos-list # Windows
# if that doesn't work, try
python3 -m pip install -U yt-videos-list # macOS
python -m pip install -U yt-videos-list # Windows
Then open up a python interpreter
python3 # macOS
python # Windows
and run the program:
from yt_videos_list import ListCreator
lc = ListCreator()
help(lc) # display API information - shows available parameters and functions
my_url = 'https://www.youtube.com/user/1veritasium'
lc.create_list_for(url=my_url)
Python documentation (will be updated most frequently, so check this page for updates!)
Repository homepage
PyPI page
Sample solution in Python, with help taken from a video tutorial.
Like many other answers, the uploads playlist ID is retrieved from the channel ID first.
import urllib.request
import json

key = "YOUR_YOUTUBE_API_v3_BROWSER_KEY"

# List of channels: mention whether you are pasting a channel id or a username - "id" or "forUsername"
ytids = [["bbcnews", "forUsername"], ["UCjq4pjKj9X4W9i7UnYShpVg", "id"]]

newstitles = []
for ytid, ytparam in ytids:
    urld = "https://www.googleapis.com/youtube/v3/channels?part=contentDetails&" + ytparam + "=" + ytid + "&key=" + key
    with urllib.request.urlopen(urld) as url:
        datad = json.loads(url.read())
    uploadsdet = datad['items']
    # get upload id from channel id
    uploadid = uploadsdet[0]['contentDetails']['relatedPlaylists']['uploads']
    # retrieve list
    urld = "https://www.googleapis.com/youtube/v3/playlistItems?part=snippet%2CcontentDetails&maxResults=50&playlistId=" + uploadid + "&key=" + key
    with urllib.request.urlopen(urld) as url:
        datad = json.loads(url.read())
    for data in datad['items']:
        ntitle = data['snippet']['title']
        nlink = data['contentDetails']['videoId']
        newstitles.append([nlink, ntitle])

for link, title in newstitles:
    print(link, title)
This is my Python solution, using the Google API client.
Observations:
Create a .env file to store your API Developer Key, and put it in your .gitignore file
The parameter "forUserName" should be set with the name of the Youtube Channel (username). Alternatively, you can use the channel id, setting the parameter "id", instead of "forUserName".
The object "playlistItem" gives you access to each video. I'm showing only its title but there are many other properties.
import os

import googleapiclient.discovery
from decouple import config


def main():
    os.environ["OAUTHLIB_INSECURE_TRANSPORT"] = "1"

    api_service_name = "youtube"
    api_version = "v3"
    DEVELOPER_KEY = config('API_KEY')

    youtube = googleapiclient.discovery.build(
        api_service_name, api_version, developerKey=DEVELOPER_KEY)

    request = youtube.channels().list(
        part="contentDetails",
        forUsername="username",
        # id="oiwuereru8987",
    )
    response = request.execute()

    for item in response['items']:
        playlistId = item['contentDetails']['relatedPlaylists']['uploads']

        nextPageToken = ''
        while nextPageToken != None:
            playlistResponse = youtube.playlistItems().list(
                part='snippet',
                playlistId=playlistId,
                maxResults=25,
                pageToken=nextPageToken
            )
            playlistResponse = playlistResponse.execute()
            print(playlistResponse.keys())

            for idx, playlistItem in enumerate(playlistResponse['items']):
                print(idx, playlistItem['snippet']['title'])

            if 'nextPageToken' in playlistResponse.keys():
                nextPageToken = playlistResponse['nextPageToken']
            else:
                nextPageToken = None


if __name__ == "__main__":
    main()
Example for the .env file
API_KEY=<Key_Here>
Using the gapi JavaScript API, you can do this
<script src="https://apis.google.com/js/api.js"></script>
const start = () => {
gapi.client
.init({
apiKey: "your_youtubeApiKey",
discoveryDocs: ["https://www.googleapis.com/discovery/v1/apis/youtube/v3/rest"],
scope: "https://www.googleapis.com/auth/youtube.readonly",
})
.then(() => {
console.log("gapi.client initiated");
})
.then(() =>
gapi.client.youtube.channels.list({
part: "snippet,contentDetails,statistics",
id: "youtube_channelId",
// forUsername: 'Bankless',
})
)
.then(
(res) =>
// get the youtube related playlist id
res.result.items[0].contentDetails.relatedPlaylists.uploads
)
.then((playlistId) =>
gapi.client.youtube.playlistItems.list({
part: "snippet",
playlistId,
maxResults: 50,
})
)
.then((res) =>
// get youtube videos snippets
res.result.items.map((item) => item.snippet)
)
.then((snippets) =>
snippets.map((snippet) => {
const { title, description, resourceId } = snippet;
const { videoId } = resourceId;
return { title, description, videoId };
})
)
.then((videos) => {
console.log(videos);
})
.catch((err) => console.error(err));
};
gapi.load("client", start);
Docs:
https://github.com/google/google-api-javascript-client
https://developers.google.com/youtube/v3/guides/auth/client-side-web-apps#callinganapi
You have to get the channel_id of the video you want to get the data from.
To get the channel_id from a video_id, you can use the videos:list endpoint of the YouTube Data API - pass the video_id in the id parameter.
Then, with the channel_id, change the second character to "U":
This modified ID is the uploads playlist of that YouTube channel.
With this uploads playlist_id, you can use the PlaylistItems:list endpoint of the YouTube Data API to retrieve all the uploaded videos from the channel.
In the part parameter add "id,snippet,contentDetails,status",
in playlistId add the modified channel ID,
and then execute.
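As a rough Python sketch of those steps (the API key is a placeholder and the video ID is the one used in the example URLs earlier):

import requests

API_KEY = 'YOUR_API_KEY'   # placeholder
VIDEO_ID = 'YxLCwfA1cLw'   # any video from the target channel

# videos:list tells us which channel the video belongs to
video = requests.get(
    'https://www.googleapis.com/youtube/v3/videos',
    params={'part': 'snippet', 'id': VIDEO_ID, 'key': API_KEY},
).json()
channel_id = video['items'][0]['snippet']['channelId']

# change the second character to "U" to get the uploads playlist ID
uploads_playlist_id = channel_id[0] + 'U' + channel_id[2:]

# playlistItems:list then returns the uploaded videos of that channel
items = requests.get(
    'https://www.googleapis.com/youtube/v3/playlistItems',
    params={'part': 'id,snippet,contentDetails,status',
            'playlistId': uploads_playlist_id,
            'maxResults': 50, 'key': API_KEY},
).json()
for item in items.get('items', []):
    print(item['contentDetails']['videoId'], item['snippet']['title'])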

Parameters of Youtube-Channel with php ( Age, Description, Upload-Count... )

I want to code a site with information about some YouTube channels. I got the following parameters:
Subscribers:
<?php
$json = file_get_contents("http://gdata.youtube.com/feeds/api/users/*username*?alt=json");
$data = json_decode($json, true);
echo '' . $data['entry']['yt$statistics']['subscriberCount'];
?>
Channel views:
<?php
$json = file_get_contents("http://gdata.youtube.com/feeds/api/users/*username*?alt=json");
$data = json_decode($json, true);
echo '' . $data['entry']['yt$statistics']['viewCount'];
?>
Total views:
<?php
$json = file_get_contents("http://gdata.youtube.com/feeds/api/users/*username*?alt=json");
$data = json_decode($json, true);
echo '' . $data['entry']['yt$statistics']['totalUploadViews'];
?>
But how could I get the count of the uploaded videos? I only want to show how many videos were uploaded by this user.
I also want to show the age, description, snippet, and the date when they joined YouTube.
Could anybody help me?
The number of public videos in a given YouTube channel can be obtained via the openSearch$totalResults->$t property of the uploads feed for that channel. Here's an example for GoogleDevelopers:
http://gdata.youtube.com/feeds/api/users/googledevelopers/uploads?v=2&alt=json
"openSearch$totalResults": {
"$t": 1458
}
Most of that other information you're looking for, regarding the channel creator's personal details, is no longer exposed on YouTube.com or via the API.

Twitter feed protected by default?

I'm attempting to write a script that'll fetch a couple of users' latest tweets. It works great on my own Twitter account, but not on the other accounts, which were created very recently (< 7 days).
Upon checking their account settings, they report that "Protect my tweets" is unchecked, which should mean that I can access them publicly using the Twitter API.
Relevant code:
$url = 'http://api.twitter.com/1/statuses/user_timeline.json?user_id=' . $twID . '&count=' . $count . '&trim_user=true';
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$return[] = json_decode(curl_exec($ch), true);
Result from Twitter API:
[0] => Array
    (
        [error] => This method requires authentication.
        [request] => /1/statuses/user_timeline.json?user_id=1540067663&count=6&trim_user=true
    )

[1] => Array
    (
        [0] => Array
            (
                [favorited] => ... // Success -- Output truncated for brevity.
            )
    )

[2] => Array
    (
        [error] => This method requires authentication.
        [request] => /1/statuses/user_timeline.json?user_id=1532872753&count=6&trim_user=true
    )
Are new accounts automatically protected in the Twitter API?
Where is $twID coming from?
I actually had the opposite problem, where the screen name param was giving me wonky results, so I had to use the user ID. You have to love the Twitter API sometimes.
You might find this call useful, which gives you the mapping between the two:
http://api.twitter.com/1/users/lookup.xml?screen_name=twitterapi
http://api.twitter.com/1/users/lookup.xml?user_id=6253282
Utilising screen_name instead of user_id resolved the issue to my satisfaction.
The final link format:
$url = 'http://api.twitter.com/1/statuses/user_timeline.json?screen_name=' . $twID . '&count=' . $count;
Possibly this may be a reportable bug with the Twitter API... but we shall see. :)

Replies to a particular tweet, Twitter API

Is there a way in the Twitter API to get the replies to a particular tweet? Thanks
Here is the procedure to get the replies to a tweet:
When you fetch the tweet, store the tweet ID, i.e. id_str.
Using the Twitter search API, do the following query:
[q="to:$tweeterusername", sinceId = $tweetId]
Loop over all the results; the results whose in_reply_to_status_id_str matches $tweetId are the replies to the post.
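As a rough Python sketch of that filtering step, using the legacy v1.1 search endpoint and a placeholder bearer token (the same idea applies to whichever search client you use):

import requests

BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # placeholder

def get_replies(screen_name, tweet_id_str):
    resp = requests.get(
        "https://api.twitter.com/1.1/search/tweets.json",
        params={"q": "to:" + screen_name, "since_id": tweet_id_str, "count": 100},
        headers={"Authorization": "Bearer " + BEARER_TOKEN},
    )
    resp.raise_for_status()
    statuses = resp.json().get("statuses", [])
    # keep only statuses that are direct replies to the tweet we care about
    return [s for s in statuses if s.get("in_reply_to_status_id_str") == tweet_id_str]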
From what I understand, there's not a way to do that directly (at least not now). It seems like something that should be added; they recently added some 'retweet' capabilities, so it seems logical to add this as well.
Here's one possible way to do this. First, sample tweet data (from statuses/show):
<status>
<created_at>Tue Apr 07 22:52:51 +0000 2009</created_at>
<id>1472669360</id>
<text>At least I can get your humor through tweets. RT #abdur: I don't mean this in a bad way, but genetically speaking your a cul-de-sac.</text>
<source>TweetDeck</source>
<truncated>false</truncated>
<in_reply_to_status_id></in_reply_to_status_id>
<in_reply_to_user_id></in_reply_to_user_id>
<favorited>false</favorited>
<in_reply_to_screen_name></in_reply_to_screen_name>
<user>
<id>1401881</id>
...
From statuses/show you can find the user's ID. Then statuses/mentions_timeline will return a list of statuses for a user. Just parse that return looking for an in_reply_to_status_id matching the original tweet's ID.
The Twitter API v2 supports this now using a conversation_id field. You can read more in the docs.
First, request the conversation_id field of the tweet.
https://api.twitter.com/2/tweets?ids=1225917697675886593&tweet.fields=conversation_id
Second, then search tweets using the conversation_id as the query.
https://api.twitter.com/2/tweets/search/recent?query=conversation_id:1225912275971657728
This is a minimal example, so you should add other fields as you need to the URL.
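A minimal Python sketch of those two v2 calls with the requests library, assuming an app bearer token (the tweet ID is the one from the example URL above):

import requests

BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # placeholder
HEADERS = {"Authorization": f"Bearer {BEARER_TOKEN}"}

# Step 1: get the conversation_id of the tweet
tweet = requests.get(
    "https://api.twitter.com/2/tweets",
    params={"ids": "1225917697675886593", "tweet.fields": "conversation_id"},
    headers=HEADERS,
).json()
conversation_id = tweet["data"][0]["conversation_id"]

# Step 2: search recent tweets in that conversation (recent search covers about the last 7 days)
replies = requests.get(
    "https://api.twitter.com/2/tweets/search/recent",
    params={"query": f"conversation_id:{conversation_id}",
            "tweet.fields": "author_id,in_reply_to_user_id"},
    headers=HEADERS,
).json()
for reply in replies.get("data", []):
    print(reply["id"], reply["text"])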
Twitter has an undocumented API called related_results. It will give you replies for the specified tweet ID. Not sure how reliable it is as it's experimental; however, this is the same API call that is made on the Twitter web client.
Use at your own risk. :)
https://api.twitter.com/1/related_results/show/172019363942117377.json?include_entities=1
For more info, check out this discussion on dev.twitter:
https://dev.twitter.com/discussions/293
Here is my solution. It utilizes Abraham's Twitter Oauth PHP library: https://github.com/abraham/twitteroauth
It requires you to know the Twitter user's screen_name attribute as well as the id_str attribute of the tweet in question. This way, you can get an arbitrary conversation feed from any arbitrary user's tweet:
*UPDATE: Refreshed code to reflect object access vs array access:
function get_conversation($id_str, $screen_name, $return_type = 'json', $count = 100, $result_type = 'mixed', $include_entities = true) {
    // $connection is assumed to be a TwitterOAuth instance created beforehand
    // with your consumer key/secret and access token/secret.
    global $connection;

    $params = array(
        'q' => 'to:' . $screen_name, // no need to urlencode this!
        'count' => $count,
        'result_type' => $result_type,
        'include_entities' => $include_entities,
        'since_id' => $id_str
    );

    $feed = $connection->get('search/tweets', $params);

    $comments = array();
    for ($index = 0; $index < count($feed->statuses); $index++) {
        if ($feed->statuses[$index]->in_reply_to_status_id_str == $id_str) {
            array_push($comments, $feed->statuses[$index]);
        }
    }

    switch ($return_type) {
        case 'array':
            return $comments;
            break;
        case 'json':
        default:
            return json_encode($comments);
            break;
    }
}
Here I am sharing simple R code to fetch the replies to a specific tweet.
userName = "SrBachchan"
##fetch tweets from #userName timeline
tweets = userTimeline(userName,n = 1)
## converting tweets list to DataFrame
tweets <- twListToDF(tweets)
## building queryString to fetch retweets
queryString = paste0("to:",userName)
## retrieving tweet ID for which reply is to be fetched
Id = tweets[1,"id"]
## fetching all the reply to userName
rply = searchTwitter(queryString, sinceID = Id)
rply = twListToDF(rply)
## eliminate all the reply other then reply to required tweet Id
rply = rply[!rply$replyToSID > Id,]
rply = rply[!rply$replyToSID < Id,]
rply = rply[complete.cases(rply[,"replyToSID"]),]
## now rply DataFrame contains all the required replies.
You can use the twarc package in Python to collect all the replies to a tweet.
twarc replies 824077910927691778 > replies.jsonl
Also, it is possible to collect all the reply chains (replies to the replies) to a tweet using the command below:
twarc replies 824077910927691778 --recursive
Not in an easy, pragmatic way. There is a feature request in for it:
http://code.google.com/p/twitter-api/issues/detail?id=142
There are a couple of third-party websites that provide APIs but they often miss statuses.
I've implemented this in the following way:
1) statuses/update returns the ID of the last status (if include_entities is true).
2) Then you can request statuses/mentions and filter the result by in_reply_to_status_id. The latter should be equal to the particular ID from step 1.
As satheesh states, it works great. Here is the REST API code I used:
ini_set('display_errors', 1);
require_once('TwitterAPIExchange.php');
/** Set access tokens here - see: https://dev.twitter.com/apps/ **/
$settings = array(
'oauth_access_token' => "xxxx",
'oauth_access_token_secret' => "xxxx",
'consumer_key' => "xxxx",
'consumer_secret' => "xxxx"
);
// Your specific requirements
$url = 'https://api.twitter.com/1.1/search/tweets.json';
$requestMethod = 'GET';
$getfield = '?q=to:screen_name&sinceId=twitter_id';
// Perform the request
$twitter = new TwitterAPIExchange($settings);
$b = $twitter->setGetfield($getfield)
->buildOauth($url, $requestMethod)
->performRequest();
$arr = json_decode($b,TRUE);
echo "Replies <pre>";
print_r($arr);
die;
I came across the same issue a few months ago at work, as I was previously using their related_tweets endpoint in REST V1.
So I had to create a workaround, which I have documented here:
http://adriancrepaz.com/twitter_conversations_api Mirror - Github fork
This class should do exactly what you want.
It scrapes the HTML of the mobile site, and parses a conversation. I've used it for a while and it seems very reliable.
To fetch a conversation...
Request
<?php
require_once 'acTwitterConversation.php';
$twitter = new acTwitterConversation;
$conversation = $twitter->fetchConversion(324215761998594048);
print_r($conversation);
?>
Response
Array
(
[error] => false
[tweets] => Array
(
[0] => Array
(
[id] => 324214451756728320
[state] => before
[username] => facebook
[name] => Facebook
[content] => Facebook for iOS v6.0 - Now with chat heads and stickers in private messages, and a more beautiful News Feed on iPad itunes.apple.com/us/app/faceboo?
[date] => 16 Apr
[images] => Array
(
[thumbnail] => https://pbs.twimg.com/profile_images/3513354941/24aaffa670e634a7da9a087bfa83abe6_normal.png
[large] => https://pbs.twimg.com/profile_images/3513354941/24aaffa670e634a7da9a087bfa83abe6.png
)
)
[1] => Array
(
[id] => 324214861728989184
[state] => before
[username] => michaelschultz
[name] => Michael Schultz
[content] => #facebook good April Fools joke Facebook... chat hasn't changed. No new features.
[date] => 16 Apr
[images] => Array
(
[thumbnail] => https://pbs.twimg.com/profile_images/414193649073668096/dbIUerA8_normal.jpeg
[large] => https://pbs.twimg.com/profile_images/414193649073668096/dbIUerA8.jpeg
)
)
....
)
)
Since statuses/mentions_timeline will return only the 20 most recent mentions, it won't be that efficient to call, and it has limitations like 75 requests per 15-minute window; instead of this we can use user_timeline.
The best way:
1. Get the screen_name or user_id parameters from statuses/show.
2. Now use user_timeline:
GET https://api.twitter.com/1.1/statuses/user_timeline.json?screen_name=screen_name&count=count
(screen_name = the name we got from statuses/show)
(count = 1 up to a maximum of 200)
count: Specifies the number of Tweets to try and retrieve, up to a maximum of 200 per distinct request.
From the result, just parse the return looking for an in_reply_to_status_id matching the original tweet's ID.
Obviously, it's not ideal, but it will work.
If you need all replies related to one user for ANY DATE RANGE, and you only need to do it once (like for downloading your stuff), it is doable.
Make a Twitter development application
Apply for elevated credentials. You will instantly get them after filling out the forms. At least I did on two separate accounts today.
Your development account now has access to the v1.1 API search in the "Sandbox" tier. You get 50 requests against the tweets/search/fullarchive endpoint maxing out at 5000 returned tweets.
Make an environment for your development application.
Make a script to query https://api.twitter.com/1.1/tweets/search/fullarchive/<env name>.json where <env name> is the name of your environment. Make your query to:your_twitter_username, with fromDate when you created your account and toDate today.
Iterate over the results
This will not get your replies recursively.
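Here is a rough Python sketch of the query-and-iterate steps, assuming a bearer token and an environment labelled dev; the username and dates are placeholders:

import requests

BEARER_TOKEN = "YOUR_BEARER_TOKEN"   # placeholder
ENV_LABEL = "dev"                    # name of your premium search environment
URL = f"https://api.twitter.com/1.1/tweets/search/fullarchive/{ENV_LABEL}.json"
HEADERS = {"Authorization": f"Bearer {BEARER_TOKEN}"}

params = {
    "query": "to:your_twitter_username",   # placeholder username
    "fromDate": "200603210000",            # when the account was created (YYYYMMDDhhmm)
    "toDate": "202401010000",              # today (placeholder)
    "maxResults": 100,
}

replies = []
while True:
    data = requests.get(URL, params=params, headers=HEADERS).json()
    replies.extend(data.get("results", []))
    if "next" not in data:          # no more pages
        break
    params["next"] = data["next"]   # premium search pagination token

print(len(replies), "replies collected")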
