I would like to get the number of pull requests and issues for a particular GitHub repository. At the moment the method I'm using is really clumsy.
Using the octokit gem and the following code:
# Builds data that is sent to the API
def request_params
  data = {}
  # labels example: "bug,invalid,question"
  data["labels"] = labels.present? ? labels : ""
  # filter example: "assigned" "created" "mentioned" "subscribed" "all"
  data["filter"] = filter
  # state example: "open" "closed" "all"
  data["state"] = state
  return data
end
Octokit.auto_paginate = true
github = Octokit::Client.new(access_token: oauth_token)
github.list_issues("#{user}/#{repository}", request_params).count
The data received is extremely large, so it's very inefficient in terms of memory. I don't need any data about the issues themselves, only how many there are (based on the filters / state / labels).
I thought of a solution but was not able to implement it.
Basically: make one request just to get the headers; the headers should contain a link to the last page. Then make one more request to the last page and count how many issues are on it. Then we can calculate:
count = (number-of-pages - 1) * issues-per-page + issues-on-last-page
But I could not find out how to get response header information from the Octokit authenticated client.
If there is a simple way of doing it without octokit, I will happily use it.
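For illustration, here is roughly what I have in mind, sketched in Python with plain HTTP rather than Octokit (the repo name and token are placeholders). With per_page=1, the page number in the rel="last" Link header equals the number of matching issues, so no issue bodies beyond the first one ever need to be downloaded.
import requests
from urllib.parse import urlparse, parse_qs

REPO = "user/repository"   # placeholder
TOKEN = "oauth-token"       # placeholder

# Ask for one issue per page; only the Link header matters.
resp = requests.get(
    f"https://api.github.com/repos/{REPO}/issues",
    params={"state": "open", "per_page": 1},
    headers={"Authorization": f"token {TOKEN}"},
)

# With per_page=1, the page number of the rel="last" link is the total count.
last = resp.links.get("last")
if last:
    count = int(parse_qs(urlparse(last["url"]).query)["page"][0])
else:
    count = len(resp.json())   # everything fit on a single page
print(count)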
Note: I want to fix this issue because the number of pull requests is quite high, and the code above generates R14 errors on Heroku.
Thank You!
I feel an easy way is to use the GitHub Search API and restrict the number of PRs displayed per page by using the per_page parameter. For example, to count all the PRs of the repo OneGet/oneget you can use: https://api.github.com/search/issues?q=repo:OneGet/oneget+type:pr&per_page=1. The JSON response has the field "total_count", which gives the total number of PRs, and the response will be relatively light since it lists only one issue.
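If it helps, here is the same idea as a rough sketch in Python (the repo is just an example query, and the request runs unauthenticated, so the search rate limit is low): total_count comes straight from the search response, and only a single item is actually downloaded.
import requests

# Example query; any "repo:owner/name" qualifier works the same way.
query = "repo:OneGet/oneget type:pr"

resp = requests.get(
    "https://api.github.com/search/issues",
    params={"q": query, "per_page": 1},
)
resp.raise_for_status()

# total_count is the number of matching items; the body lists only one of them.
print(resp.json()["total_count"])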
Ref: Search Issues
I am attempting to update a table using a Python library, iterating through the table rows.
I get this error: "Error Message: The API you are trying to use could not be found. It may be available in a newer version of Excel."
Adding rows succeeds, but none of the APIs on the rows endpoint work: I can't get a range or update a row. I even tried going directly to requests to have more control over what gets passed, and I tried both the v1.0 and beta endpoints.
https://learn.microsoft.com/en-us/graph/api/tablerow-update?view=graph-rest-1.0&tabs=http
Here is the URL Endpoint I am calling:
https://{redacted}/items/{file_id}/workbook/tables/Table1/rows/0
Any help is appreciated.
Update to add code (you need an existing authenticated requests session to run it in Python):
import json
import requests

data = {'values': [5, 6, 7]}
kwargs = {
    'data': json.dumps(data),
    'headers': {
        'workbook-session-id': workbook.session.session_id,
        'Content-type': 'application/json'}}

# Works
sharepoint = 'onevmw.sharepoint.com,***REDACTED***'
drive = '***REDACTED***'
item = '****REDACTED***'
base_url = f'https://graph.microsoft.com/v1.0/sites/{sharepoint}/drives/{drive}/items/{item}'
get_url = f"{base_url}/workbook/tables/{test_table.name}/rows"
session = office_connection.account.connection.get_session(load_token=True)
get_response: requests.Response = session.request(method='get', url=get_url)
print(get_response.text)

# Doesn't work
url = f"{base_url}/workbook/tables/{test_table.name}/rows/1"
response: requests.Response = session.request(method='patch', url=url, **kwargs)
print(response.text)
That's an issue. Unfortunately, it is not documented in the official documentation at the moment.
I could make it work by changing the URL from ".../rows/1" to ".../rows/itemAt(index=1)".
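Applied to the snippet in the question, the call looks roughly like this in Python (the site/drive/item IDs, token, table name and session ID below are placeholders, and the 2-D values array is an assumption based on how Graph represents a table row):
import json
import requests

# Placeholders: fill in the same pieces as in the question's snippet.
base_url = "https://graph.microsoft.com/v1.0/sites/{site}/drives/{drive}/items/{item}"
table_name = "Table1"
workbook_session_id = "<workbook-session-id>"
access_token = "<bearer-token>"

# itemAt(index=...) addresses the row, since .../rows/{index} is rejected.
url = f"{base_url}/workbook/tables/{table_name}/rows/itemAt(index=1)"

headers = {
    "Authorization": f"Bearer {access_token}",
    "workbook-session-id": workbook_session_id,
    "Content-type": "application/json",
}

# The row values go in as a 2-D array of cells.
payload = {"values": [[5, 6, 7]]}

response = requests.patch(url, data=json.dumps(payload), headers=headers)
print(response.status_code, response.text)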
Posting the C# solution for others, since the Microsoft docs are incorrect and the actual solution is similar to Amandeep's answer for JavaScript.
The docs (incorrectly) say:
...Tables["table_name"].Rows["row_num"].Request().UpdateAsync(); // incorrect!
Correct way:
...Tables["table_name"].Rows.ItemAt(123).Request().PatchAsync(wbRow); // correct!
Note the .ItemAt method takes an int, not a string.
I'm working on a website that loads the live streams of multiple YouTube channels. At first I was trying to figure out a way to do this without using YouTube's API, but I've decided to give in.
To find whether a channel is live streaming and to get the live stream links I've been using:
https://www.googleapis.com/youtube/v3/search?part=snippet&channelId={CHANNEL_ID}&eventType=live&maxResults=10&type=video&key={API_KEY}
However, with the minimum quota being 10,000 and each search costing 100, I'm only able to do about 100 searches before I exceed my quota limit, which doesn't help at all. I ended up exceeding the quota limit in about 10 minutes. :(
Does anyone know of a better way to figure out whether a channel is currently live streaming and what the live stream links are, using as few quota points as possible?
I want to reload YouTube data for each user every 3 minutes, save it to a database, and serve the information from my own API to save server resources as well as quota points.
Hopefully someone has a good solution to this problem!
If nothing can be done about the links, just determining whether the user is live without spending 100 quota points each time would be a big help.
Since the question only specified that Search API quotas should not be used in finding out if the channel is streaming, I thought I would share a sort of work-around method. It might require a bit more work than a simple API call, but it reduces API quota use to practically nothing:
I used a simple Perl GET request to retrieve a Youtube channel's main page. Several unique elements are found in the HTML of a channel page that is streaming live:
The live viewer count tag, e.g. <li>753 watching</li>, and the LIVE NOW badge tag: <span class="yt-badge yt-badge-live" >Live now</span>.
Ascertaining whether a channel is currently streaming live then requires only a simple match to see whether the unique HTML tag is contained in the GET results, something like if ($get_results =~ /$unique_html/) in Perl. Then, an API call needs to be made only for a channel ID that is actually streaming, in order to obtain the video ID of the stream.
The advantage of this is that you already know the channel is streaming, instead of using thousands of quota points to find out. My test script successfully identifies whether a channel is streaming, by looking in the HTML code for: <span class="yt-badge yt-badge-live" > (note the weird extra spaces in the code from Youtube).
I don't know what language OP is using, or I would help with a basic GET request in that language. I used Perl, and included browser headers, User Agent and cookies, to look like a normal computer visit.
Youtube's robots.txt doesn't seem to forbid crawling a channel's main page, only the community page of a channel.
Let me know what you think about the pros and cons of this method, and please comment with what might be improved rather than disliking if you find a flaw. Thanks, happy coding!
2020 UPDATE
The yt-badge-live element seems to have been deprecated; it no longer reliably shows whether the channel is streaming. Instead, I now check the HTML for this string:
{"text":" watching"}
If I get a match, it means the page is streaming. (Non-streaming channels don't contain this string.) Again, note the weird extra whitespace. I also escape all the quotation marks since I'm using Perl.
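For anyone who would rather not use Perl, the same check is only a few lines of Python (the channel ID is a placeholder, and the headers just make the request look like a normal browser visit):
import requests

CHANNEL_ID = "UC_x5XG1OV2P6uZZ5FSM9Ttw"   # placeholder channel ID
MARKER = '{"text":" watching"}'           # only present while the channel is live (note the extra space)

resp = requests.get(
    f"https://www.youtube.com/channel/{CHANNEL_ID}",
    headers={
        "User-Agent": "Mozilla/5.0",
        "Accept-Language": "en-US,en;q=0.5",
    },
    timeout=10,
)

print("live" if MARKER in resp.text else "not live")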
Here are my two suggestions:
Check my answer where I explain how you can retrieve videos from channels that are livestreaming.
Another option could be to use the following URL and make a request each time to check whether there's a livestream:
https://www.youtube.com/channel/<CHANNEL_ID>/live
Where CHANNEL_ID is the ID of the channel you want to check for a livestream.1
1 Just notice that the URL might not work for all channels (it depends on the channel itself).
For example, the channel with ID UC7_YxT-KID8kRbqZo7MyscQ will show whether it is livestreaming at its /live URL, but the channel with ID UC4nprx9Vd84-ly7N-1Ce6Og (https://www.youtube.com/channel/UC4nprx9Vd84-ly7N-1Ce6Og/live) will show its main page instead.
Adding to the answer by Bman70, I tried to eliminate the need for a costly search request after learning that the channel is streaming live. I did this using two indicators in the HTML response from the page of a channel that is streaming live.
function findLiveStreamVideoId(channelId, cb){
  $.ajax({
    url: 'https://www.youtube.com/channel/' + channelId,
    type: "GET",
    headers: {
      'Access-Control-Allow-Origin': '*',
      'Accept-Language': 'en-US, en;q=0.5'
    }
  }).done(function(resp) {
    // One method to find the live video
    let n = resp.search(/\{"videoId[\sA-Za-z0-9:"\{\}\]\[,\-_]+BADGE_STYLE_TYPE_LIVE_NOW/i);
    // If found
    if (n >= 0) {
      let videoId = resp.slice(n + 1, resp.indexOf("}", n) - 1).split("\":\"")[1];
      return cb(videoId);
    }
    // If not found, try another method to find the live video
    n = resp.search(/https:\/\/i.ytimg.com\/vi\/[A-Za-z0-9\-_]+\/hqdefault_live.jpg/i);
    if (n >= 0) {
      let videoId = resp.slice(n, resp.indexOf(".jpg", n) - 1).split("/")[4];
      return cb(videoId);
    }
    // No streams found
    return cb(null, "No live streams found");
  }).fail(function() {
    return cb(null, "CORS Request blocked");
  });
}
However, there's a tradeoff: this method confuses a recently ended stream with a currently live stream. A workaround is to get the status of the returned videoId from the YouTube API (which costs a single unit of your quota).
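That follow-up check is just a videos.list call, roughly like this in Python (the API key and video ID are placeholders); snippet.liveBroadcastContent is "live" only while the stream is actually on air, so a recently ended stream is filtered out.
import requests

API_KEY = "YOUR_API_KEY"    # placeholder
VIDEO_ID = "dQw4w9WgXcQ"    # placeholder: the ID scraped from the channel page

resp = requests.get(
    "https://www.googleapis.com/youtube/v3/videos",
    params={"part": "snippet", "id": VIDEO_ID, "key": API_KEY},
)
items = resp.json().get("items", [])

# liveBroadcastContent is "live", "upcoming" or "none"
is_live = bool(items) and items[0]["snippet"]["liveBroadcastContent"] == "live"
print(is_live)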
I found the YouTube API to be very restrictive given the cost of the search operation. The accepted answer did not work for me, as I found the string on non-live streams as well. Web scraping with aiohttp and beautifulsoup was not an option, since the better indicators required JavaScript support, so I turned to Selenium. I looked for the CSS selector
#info-text
and then searched for the string "Started streaming" or "watching now" in it.
To reduce the load on my tiny server, which would otherwise have required a lot more resources, I moved this functionality to a Heroku dyno running a small Flask app.
# import flask dependencies
import os

from flask import Flask, request, make_response, jsonify
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By

base = "https://www.youtube.com/watch?v={0}"
delay = 3

# initialize the flask app
app = Flask(__name__)


# default route
@app.route("/")
def index():
    return "Hello World!"


# create a route for the webhook
@app.route("/islive", methods=["GET", "POST"])
def is_live():
    chrome_options = Options()
    chrome_options.binary_location = os.environ.get('GOOGLE_CHROME_BIN')
    chrome_options.add_argument('--disable-gpu')
    chrome_options.add_argument('--no-sandbox')
    chrome_options.add_argument('--disable-dev-shm-usage')
    chrome_options.add_argument('--headless')
    chrome_options.add_argument('--remote-debugging-port=9222')
    driver = webdriver.Chrome(executable_path=os.environ.get('CHROMEDRIVER_PATH'),
                              chrome_options=chrome_options)

    # accept either a full watch URL or a bare video id
    url = request.args.get("url")
    if "youtube.com" in url:
        video_id = url.split("?v=")[-1]
    else:
        video_id = url
    url = base.format(video_id)
    print(url)

    response = {"url": url, "is_live": False, "ok": False, "video_id": video_id}

    driver.get(url)
    try:
        element = WebDriverWait(driver, delay).until(
            EC.presence_of_element_located((By.CSS_SELECTOR, "#info-text")))
        result = element.text.lower().find("Started streaming".lower())
        if result != -1:
            response["is_live"] = True
        else:
            result = element.text.lower().find("watching now".lower())
            if result != -1:
                response["is_live"] = True
        response["ok"] = True
        return jsonify(response)
    except Exception as e:
        print(e)
        return jsonify(response)
    finally:
        driver.close()


# run the app
if __name__ == "__main__":
    app.run()
However, you'll need to add the following buildpacks in the settings:
https://github.com/heroku/heroku-buildpack-google-chrome
https://github.com/heroku/heroku-buildpack-chromedriver
https://github.com/heroku/heroku-buildpack-python
Set the following Config Vars in settings
CHROMEDRIVER_PATH=/app/.chromedriver/bin/chromedriver
GOOGLE_CHROME_BIN=/app/.apt/usr/bin/google-chrome
You can find the supported Python runtimes here, but anything below Python 3.9 should be fine, since Selenium had problems with improper use of the is operator.
I hope YouTube will provide better alternatives to these workarounds.
I know this is an old thread, but I thought I'd share my way of checking, which, for example, grabs the status code to use in an app.
This is for a single channel, but you could easily do a foreach with it.
<?php
#####
$ytchannelID = "UCd0BTXriKLvOs1ANx3puZ3Q";
#####
$ytliveurl = "https://www.youtube.com/channel/".$ytchannelID."/live";
$ytchannelLIVE = '{"text":" watching now"}';
$contents = file_get_contents($ytliveurl);
if (strpos($contents, $ytchannelLIVE) !== false) {
    http_response_code(200);
} else {
    http_response_code(201);
}
unset($ytliveurl);
?>
Adding onto the other answers here, I use a GET request to https://www.youtube.com/c/<CHANNEL_NAME>/live and then search for "isLive":true (rather than {"text":" watching"}).
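A rough Python version of that check (the channel name is a placeholder; the /c/ vanity URL only exists for channels that have one):
import requests

CHANNEL_NAME = "SomeChannel"   # placeholder vanity name

resp = requests.get(
    f"https://www.youtube.com/c/{CHANNEL_NAME}/live",
    headers={"User-Agent": "Mozilla/5.0"},
)

# The player config embedded in the page contains "isLive":true only for a live stream.
print('"isLive":true' in resp.text)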
I've been trying to get this to work for probably six hours now to no avail, and I've read every Stack Overflow question I could find on the topic.
I'm trying to get 100, 200, or maybe 500 photos from a single tag:
func hashtags(hashtag: String, nextMaxTagId: String?) -> RequestParamters {
    var params = "/tags/\(hashtag)/media/recent|access_token=\(accessToken)"
    var parameters = Dictionary<String, AnyObject>()
    parameters["access_token"] = accessToken

    let urlString = "https://api.instagram.com/v1/tags/\(hashtag)/media/recent"

    if let nextMaxTagId = nextMaxTagId {
        params += "|max_tag_id=\(nextMaxTagId)"
        parameters["max_tag_id"] = nextMaxTagId
    }

    let sig = HMAC.signWithKey(C.InstagramClientSecret(), usingData: params)
    parameters["sig"] = sig

    return (urlString: urlString, parameters: parameters)
}
This is what I use to construct the URL and parameters for my request. My first request does not have a nextMaxTagId; that request goes through and returns 20 images and a pagination JSON.
Then, when I extract the next_max_tag_id from the pagination block, and create a request using that parameter, I get another 20 images, but they are the same images as before and now I do not get a pagination block.
I am signing my requests correctly (as all my other API requests throughout the app go through no problem) and I am not in Sandbox mode.
Edit: I've also tried using min_tag_id=\(nextMinTagId), but I still do not receive pagination in the next request.
It seems like:
1) You are using the Instagram Developer API with what seems like an authorized API key, and you mentioned you are NOT in Sandbox, so you're in the Production environment for that API.
I'm trying to get 100, 200, or maybe 500 photos from a single tag
2) Combined with "returns 20 images and a pagination json", this means that for 100 photos you need to make at least 5 calls (100/20 == 5), for 200 it's 10, and for 500 it's 25.
3) According to the developer documentation rate limits, the overall cap on Production is 5000 req/hour, with several APIs restricted to a much smaller limit (some are 30/60 req/hour). I'm not sure I see the exact tag rate limit you are hitting, but since the question mentions:
for probably 6 hours now to no avail
it's also possible you've just been hitting the overall hourly request limit each hour.
I definitely know that this is not an answer that I enjoy giving, because it's essentially saying: you're stuck. I've actually played with the rate limits myself before, and I find them extremely limiting (pun fully intended). The only other option, albeit not as "above board", is to scrape Instagram itself for the information you need. I say it's not as "above board" because if you needed info not found on a web scrape, you could theoretically scrape the mobile API through some minor reverse engineering (ie using an HTTP proxy to spoof mobile traffic systematically).
In the end, the API Instagram publishes is definitely very limited, and will face rate limits for the foreseeable future (unless you can get those somehow lifted in a specific partnership they somehow deem worthy, although I'm not sure how this could be approached).
I am using Zend's gdata library for the Google Apps provisioning API. Since Zend doesn't yet support fetching org users (no retrieve function is provided by the library for this feed), I am making a custom gdata query to the URL (as suggested in the documentation developers.google.com/google-apps/provisioning/#retrieving_organization_users_experimental):
apps-apis.google.com/a/feeds/orguser/2.0/'.$customerId.'?get=all
This works well for <= 100 users.
Now, I have created a domain with 125 users across 5 OUs. When I fetch the above URI, I get the first 100 users (as documented and expected). However, I could not find the pagination link mentioned here: developers.google.com/google-apps/provisioning/reference#Results_Pagination
Here's the start of my orguser feed:
<?xml version='1.0' encoding='UTF-8'?><feed xmlns='http://www.w3.org/2005/Atom' xmlns:openSearch='http://a9.com/-/spec/opensearchrss/1.0/' xmlns:apps='http://schemas.google.com/apps/2006'><id>https://apps-apis.google.com/a/feeds/orguser/2.0/C00xxxxxxx</id><updated>2013-01-06T08:17:43.520Z</updated><link rel='next' type='application/atom+xml' href='https://apps-apis.google.com/a/feeds/orguser/2.0/C00xxxxxxx?get=all&startKey=RASS03jtnz0s2orxmbn.'/><link rel='http://schemas.google.com/g/2005#feed' type='application/atom+xml' href='https://apps-apis.google.com/a/feeds/orguser/2.0/C00xxxxx'/>
I tried the https://apps-apis.google.com/a/feeds/orguser/2.0/C00xxxxxxx?get=all&startKey=RASS03jtnz0s2orxmbn. link, but it gives me the exact same 100 users that the https://apps-apis.google.com/a/feeds/orguser/2.0/C00xxxxxxx?get=all link gives. This is the only occurrence of the word "next" in my feed, so there is no other URI I can try to fetch the next 25 users.
So I have only been able to get 100 users from this API call. How do I go about fetching the next 25 users? Examples/code would be really appreciated. Or what am I doing wrong?
Please help - this is blocking an urgent delivery.
Thanks,
Vinay.
Your 2nd request should look like:
https://apps-apis.google.com/a/feeds/orguser/2.0/C00xxxxxx?startKey=RASS03jtnz0s2orxmbn&get=all
startKey should be set to the value of the next parameter and get should continue to be all for each page request.
Also, make sure the URL is decoded: if & is encoded as &amp; in the request, then Google's servers will see all of all&amp;startKey=RASS03jtnz0s2orxmbn as the value of get, and they won't see a startKey parameter at all.
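For anyone who wants to see the whole paging loop in one place, here is a rough sketch in Python (the customer ID and token are placeholders); it simply keeps following the rel='next' link until the feed stops providing one.
import requests
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"
CUSTOMER_ID = "C00xxxxxxx"   # placeholder
TOKEN = "<oauth-token>"      # placeholder

url = f"https://apps-apis.google.com/a/feeds/orguser/2.0/{CUSTOMER_ID}?get=all"
entries = []

while url:
    resp = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"})
    feed = ET.fromstring(resp.content)
    entries.extend(feed.findall(f"{ATOM}entry"))

    # Follow the rel='next' link as-is (already decoded, so startKey survives intact).
    next_link = feed.find(f"{ATOM}link[@rel='next']")
    url = next_link.get("href") if next_link is not None else None

print(len(entries))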
I am trying to do a simple Salesforce-Asana integration. I have many functions working, but I am having trouble with adding a tag to a workspace. Since I can't find documentation on the addTag method, I'm sort of guessing at what is required.
If I post the following JSON to https://app.asana.com/api/1.0/workspaces/WORKSPACEID/tasks:
{"data":{"name":"MyTagName","notes":"Test Notes"}}
The tag gets created in Asana, but with blank notes and name fields. If I try to get a bit more fancy and post:
{"data":{"name":"MyTagName","notes":"Test Notes","followers":[{"id":"MY_USER_ID"}]}}
I receive:
{"errors":[{"message":"Invalid field: {\"data\":{\"name\":\"MyTagName\",\"notes\":\"Test Notes\",\"followers\":[{\"id\":\"MY_USER_ID\"}]}}"}]}
I'm thinking the backslashes may mean that my request is being modified by the post, though debug output shows a properly formatted json string before the post.
Sample Code:
JSONGenerator jsongen = JSON.createGenerator(false);
jsongen.writeStartObject();
jsongen.writeFieldName('data');
jsongen.writeStartObject();
jsongen.writeStringField('name', 'MyTagName');
jsongen.writeStringField('notes', 'Test Notes');
jsongen.writeFieldName('followers');
jsongen.writeStartArray();
jsongen.writeStartObject();
jsongen.writeStringField('id', 'MY_USER_ID');
jsongen.writeEndObject();
jsongen.writeEndArray();
jsongen.writeEndObject();
jsongen.writeEndObject();
String requestbody = jsongen.getAsString();
HttpRequest req = new HttpRequest();
req.setEndpoint('https://app.asana.com/api/1.0/workspaces/WORKSPACEID/tags');
req.setMethod('POST');
//===Auth header created here - working fine===
req.setBody(requestbody);
Http http = new Http();
HTTPResponse res = http.send(req);
return res.getBody();
Any help appreciated. I am inexperienced using JSON as well as the Asana API.
The problem was that I was posting to the wrong endpoint. Instead of workspaces/workspaceid/tags, I should have been using /tags and including workspaceid in the body of the request.
Aha, so you can add tags and even set followers, despite the API documentation not mentioning that you can (and claiming that followers are read-only).
So to sum up for anyone else interested: POSTing to the endpoint https://app.asana.com/api/1.0/tags you can create a tag like this:
{ "data" : { "workspace": 1234567, "name" : "newtagname", "followers": [45678, 6789] } }
where 1234567 is your workspace ID and 45678 and 6789 are your new followers.
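As a concrete illustration, the same request in Python looks roughly like this (the workspace ID, follower IDs and token are placeholders, and the bearer token is just one of the auth schemes Asana accepts):
import json
import requests

TOKEN = "<api-token>"   # placeholder

payload = {
    "data": {
        "workspace": 1234567,          # placeholder workspace ID
        "name": "newtagname",
        "followers": [45678, 6789],    # placeholder follower IDs
    }
}

resp = requests.post(
    "https://app.asana.com/api/1.0/tags",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
    },
    data=json.dumps(payload),
)
print(resp.status_code, resp.json())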
Since you posted this question, Asana's API and developer documentation have introduced Tags. The documentation lays out the answer to your question pretty clearly:
https://asana.com/developers/api-reference/tags
I'm a bit confused by your question. You ask "how to add a tag", but the first half of your question talks about adding a task. The problem with what you describe there is that you are trying to set a task's followers, but the followers field is currently read-only according to Asana's API documentation. That is why you are getting an error. You cannot set followers with the API right now.
The second part of your question - with the sample code - does look like you are trying to add a tag. However, right now the Asana API does not support this (at least according to the API documentation). You can update an existing tag but you can't add one.
So, to sum up: at this time the API does not allow you to add followers to a task or to create new tags.