Instagram Graph API Recent Search result returns blank data

Thank you for reviewing my question.
I've been using the Instagram Graph API to run hashtag recent searches.
import json
import requests

# Retrieve keys
key = json.loads(keycontent)
HASHTAG_ID = key['HASHTAG_ID']
USER_ID = key['USER_ID']
ACCESS_TOKEN = key['ACCESS_TOKEN']
CURSOR = key['CURSOR']
topic = 'HASHTAG_ID'  # Job

# Build the request URL
url = (f"https://graph.facebook.com/{HASHTAG_ID}/recent_media"
       f"?fields=id,permalink,caption,media_url&limit=50"
       f"&user_id={USER_ID}&access_token={ACCESS_TOKEN}")
if CURSOR != "":
    url = url + '&after=' + CURSOR
res = requests.get(url)
print(res.json()['data'])
It works quite successfully, but the problem is that it starts to return blank data after the function has been called several times. The responses I am receiving at the moment are equal to one of the following:
{"data": []}
{
    "data": [
    ],
    "paging": {
        "cursors": {
            "after": "NEXT_CURSOR"
        },
        "next": "LINK_WITH_NEXT_CURSOR"
    }
}
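For reference, here is a minimal sketch of the pagination loop I am running (the fetch_recent_media helper is illustrative, not my actual job code; it assumes the same endpoint and keys as above):

import requests

def fetch_recent_media(hashtag_id, user_id, access_token, max_pages=10):
    # Follow the 'after' cursor until the API stops returning data
    base = (f"https://graph.facebook.com/{hashtag_id}/recent_media"
            f"?fields=id,permalink,caption,media_url&limit=50"
            f"&user_id={user_id}&access_token={access_token}")
    media, cursor = [], ""
    for _ in range(max_pages):
        url = base + ('&after=' + cursor if cursor else '')
        payload = requests.get(url).json()
        data = payload.get('data', [])
        if not data:  # an empty page: this is where I hit the blank responses
            break
        media.extend(data)
        cursor = payload.get('paging', {}).get('cursors', {}).get('after', '')
        if not cursor:  # no further cursor returned
            break
    return media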
I've checked several known issues, and what I've verified is listed below.
It is not a permission issue. I've checked all the related permissions, and it is confirmed that the app has all the permissions it needs.
The app is certainly below the execution limit. In fact, it is usually below half of it.
The app is also below the limit on the number of hashtags I can search. I've queried significantly fewer than 30 hashtags within 7 days.
So, I'd like to know what potential reasons there are for receiving blank data from an Instagram Graph API call.
Thank you in advance.

Related

Unable to update Table Row Values

I am attempting to update a table using a Python library to iterate through the table rows.
I get this error: "Error Message: The API you are trying to use could not be found. It may be available in a newer version of Excel."
Adding rows succeeds, but none of the APIs on the rows endpoint work: I can't get a range or update a row. I even tried going directly to requests to have more control over what gets passed, and I tried both the v1.0 and beta endpoints as well.
https://learn.microsoft.com/en-us/graph/api/tablerow-update?view=graph-rest-1.0&tabs=http
Here is the URL Endpoint I am calling:
https://{redacted}/items/{file_id}/workbook/tables/Table1/rows/0
Any help is appreciated.
Update to add code (you need an existing authenticated requests session to run it in Python):
data = {'values': [5, 6, 7]}
kwargs = {
    'data': json.dumps(data),
    'headers': {
        'workbook-session-id': workbook.session.session_id,
        'Content-type': 'application/json'}}

# Works
sharepoint = 'onevmw.sharepoint.com,***REDACTED***'
drive = '***REDACTED***'
item = '****REDACTED***'
base_url = f'https://graph.microsoft.com/v1.0/sites/{sharepoint}/drives/{drive}/items/{item}'
get_url = f"{base_url}/workbook/tables/{test_table.name}/rows"
session = office_connection.account.connection.get_session(load_token=True)
get_response: requests.Response = session.request(method='get', url=get_url)
print(get_response.text)

# Doesn't work
url = f"{base_url}/workbook/tables/{test_table.name}/rows/1"
response: requests.Response = session.request(method='patch', url=url, **kwargs)
print(response.text)
That's a known issue. Unfortunately it is not documented in the official documentation at the moment.
I could make it work by changing the URL from ".../rows/1" to ".../rows/itemAt(index=1)".
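Applied to the Python snippet in the question, the fix would look roughly like this (a sketch reusing base_url, test_table, session, and kwargs from above):

# Same PATCH call as the failing one, but addressing the row via itemAt(index=...)
url = f"{base_url}/workbook/tables/{test_table.name}/rows/itemAt(index=1)"
response: requests.Response = session.request(method='patch', url=url, **kwargs)
print(response.text)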
Posting the C# solution for others, since the Microsoft docs are incorrect and the actual solution is similar to @Amandeep's answer for JavaScript.
The docs (incorrectly) say:
...Tables["table_name"].Rows["row_num"].Request().UpdateAsync(); // incorrect!
Correct way:
...Tables["table_name"].Rows.ItemAt(123).Request().PatchAsync(wbRow); // correct!
Note the .ItemAt method takes an int, not a string.

How to retrieve Slack messages via API identified by permalink?

I'm trying to retrieve a list of Slack reminders, which works fine using Slack API's reminders.list method. However, reminders that are set using SlackBot (i.e. by asking Slackbot to remind me of a message) return the respective permalink of that message as text:
{
    "ok": true,
    "reminders": [
        {
            "id": "Rm012C299C1E",
            "creator": "UV09YANLX",
            "text": "https:\/\/team.slack.com\/archives\/DUNB811AM\/p1583441290000300",
            "user": "UV09YANLX",
            "recurring": false,
            "time": 1586789303,
            "complete_ts": 0
        },
Instead of showing the permalink, I'd naturally like to show the message I wanted to be reminded of. However, I couldn't find any hints in the Slack API docs on how to retrieve a message identified by a permalink. The link is presumably generated by chat.getPermalink, but there seems to be no obvious chat.getMessageByPermalink or so.
I tried to interpret the path elements as channel and timestamp, but the timestamp (transformed from the example above: 1583441290.000300) doesn't seem to really match. At least I don't end up with the message I expected to retrieve when passing this as latest to conversations.history and limiting to 1.
After fiddling a while longer, here's how I finally managed in JS:
async function downloadSlackMsgByPermalink(permalink) {
    const pathElements = permalink.substring(8).split('/');
    const channel = pathElements[2];
    var url;
    if (permalink.includes('thread_ts')) {
        // Threaded message, use conversations.replies endpoint
        // Strip the leading 'p' and anything from '?' onwards
        var ts = pathElements[3].substring(1, pathElements[3].indexOf('?'));
        ts = ts.substring(0, ts.length - 6) + '.' + ts.substring(ts.length - 6);
        var latest = pathElements[3].substring(pathElements[3].indexOf('thread_ts=') + 10);
        if (latest.indexOf('&') != -1) latest = latest.substring(0, latest.indexOf('&'));
        url = `https://slack.com/api/conversations.replies?token=${encodeURIComponent(slackAccessToken)}&channel=${channel}&ts=${ts}&latest=${latest}&inclusive=true&limit=1`;
    } else {
        // Non-threaded message, use conversations.history endpoint
        var latest = pathElements[3].substring(1);
        if (latest.indexOf('?') != -1) latest = latest.substring(0, latest.indexOf('?'));
        latest = latest.substring(0, latest.length - 6) + '.' + latest.substring(latest.length - 6);
        url = `https://slack.com/api/conversations.history?token=${encodeURIComponent(slackAccessToken)}&channel=${channel}&latest=${latest}&inclusive=true&limit=1`;
    }
    const response = await fetch(url);
    const result = await response.json();
    if (result.ok === true) {
        return result.messages[0];
    }
}
It hasn't been tested to the fullest extent, but the first results look alright:
The trick with the conversations.history endpoint was to include the inclusive=true parameter
Messages might be threaded: the separate endpoint conversations.replies is required to fetch those
As the Slack API docs state, ts and thread_ts look like timestamps, but they aren't. Gladly, using them a bit like timestamps (i.e. cutting off the last six characters and inserting a dot) seems to work anyway.
Naturally, the slackAccessToken variable needs to be set beforehand
I'm aware the way to extract and transform the URL components in the code above might not be the most elegant solution, but it proves the concept :-)
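For anyone working in Python rather than JS, here is a rough equivalent of the non-threaded branch only (a sketch under the same assumptions; slack_access_token must be set, and threaded permalinks would still need the conversations.replies variant):

import re
import requests

def fetch_slack_msg_by_permalink(permalink, slack_access_token):
    # e.g. https://team.slack.com/archives/DUNB811AM/p1583441290000300
    m = re.search(r'/archives/([^/]+)/p(\d+)', permalink)
    channel, raw_ts = m.group(1), m.group(2)
    ts = raw_ts[:-6] + '.' + raw_ts[-6:]  # p1583441290000300 -> 1583441290.000300
    resp = requests.get(
        'https://slack.com/api/conversations.history',
        params={'channel': channel, 'latest': ts, 'inclusive': 'true', 'limit': 1},
        headers={'Authorization': f'Bearer {slack_access_token}'},
    )
    result = resp.json()
    if result.get('ok'):
        return result['messages'][0]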

Octokit GitHub API

I would like to get the number of pull requests and issues for a particular GitHub repo. At the moment the method I'm using is really clumsy.
Using the octokit gem and the following code:
# Builds data that is sent to the API
def request_params
  data = {}
  # labels example: "bug,invalid,question"
  data["labels"] = labels.present? ? labels : ""
  # filter example: "assigned" "created" "mentioned" "subscribed" "all"
  data["filter"] = filter
  # state example: "open" "closed" "all"
  data["state"] = state
  return data
end
Octokit.auto_paginate = true
github = Octokit::Client.new(access_token: oauth_token)
github.list_issues("#{user}/#{repository}", request_params).count
The data received is extremely big, so it's very inefficient in terms of memory. I don't need the issue data itself, only how many issues there are (based on the filters / state / labels).
I thought of a solution but was not able to implement it.
Basically: make one request to get the headers; the headers should contain a link to the last page. Then make one more request to the last page and check how many issues are on it. Then we can calculate:
count = (number_of_pages - 1) * issues_per_page + issues_on_last_page
But I could not find out how to get response header information from the Octokit authenticated client.
If there is a simple way of doing it without octokit, I will happily use it.
Note: I want to fix this issue because the number of pull requests is quite high, and the code above generates R14 errors on Heroku.
Thank You!
An easy way is to use the GitHub Search API and restrict the number of PRs displayed per page with the per_page parameter. For example, to count all the PRs of the repo OneGet/oneget you can use https://api.github.com/search/issues?q=repo:OneGet/oneget+type:pr&per_page=1. The JSON response has the field "total_count", which gives the total number of PRs, and the response will be relatively light since it lists only one issue.
Ref: Search Issues
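A minimal sketch of that approach in Python with requests (count_prs is just an illustrative helper name):

import requests

def count_prs(owner, repo):
    # Ask the Search API for one result; total_count carries the full count
    resp = requests.get(
        'https://api.github.com/search/issues',
        params={'q': f'repo:{owner}/{repo} type:pr', 'per_page': 1},
    )
    resp.raise_for_status()
    return resp.json()['total_count']

print(count_prs('OneGet', 'oneget'))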

What is the maximum HTTP GET request length for a YouTube API?

I want to use the YouTube videos.list API to get details of multiple videos in a single request. As per the API documentation, I can send a comma-separated list of video IDs as the id parameter. But what is the maximum length possible?
I know the GET request length limit depends on both the server and the client. In my case I am making the request server-side, not from a browser, so the maximum length could be configured on my end. But what is the maximum length acceptable to YouTube?
UPDATE: Though I couldn't find official documentation, the current limit is 50 IDs, based on the tests performed as explained by Tempus. I am adding code below with 51 different video IDs (one is commented out) for those who want to check this in the future.
var key = prompt("Please enter your key here");
if (!key) {
    alert("No key entered");
} else {
    var videoIds = ["RgKAFK5djSk",
        "fRh_vgS2dFE",
        "OPf0YbXqDm0",
        "KYniUCGPGLs",
        "e-ORhEE9VVg",
        "nfWlot6h_JM",
        "NUsoVlDFqZg",
        "YqeW9_5kURI",
        "YQHsXMglC9A",
        "CevxZvSJLk8",
        "09R8_2nJtjg",
        "HP-MbfHFUqs",
        "7PCkvCPvDXk",
        "0KSOMA3QBU0",
        "hT_nvWreIhg",
        "kffacxfA7G4",
        "DK_0jXPuIr0",
        "2vjPBrBU-TM",
        "lp-EO5I60KA",
        "5GL9JoH4Sws",
        "kOkQ4T5WO9E",
        "AJtDXIazrMo",
        "RBumgq5yVrA",
        "pRpeEdMmmQ0",
        "YBHQbu5rbdQ",
        "PT2_F-1esPk",
        "uelHwf8o7_U",
        "KQ6zr6kCPj8",
        "IcrbM1l_BoI",
        "vjW8wmF5VWc",
        "PIh2xe4jnpk",
        "QFs3PIZb3js",
        "TapXs54Ah3E",
        "uxpDa-c-4Mc",
        "oyEuk8j8imI",
        "ebXbLfLACGM",
        "kHSFpGBFGHY",
        "CGyEd0aKWZE",
        "rYEDA3JcQqw",
        "fLexgOxsZu0",
        "450p7goxZqg",
        "ASO_zypdnsQ",
        "t4H_Zoh7G5A",
        "QK8mJJJvaes",
        "QcIy9NiNbmo",
        "yzTuBuRdAyA",
        "L0MK7qz13bU",
        "uO59tfQ2TbA",
        "kkx-7fsiWgg",
        "EgqUJOudrcM",
        // "60ItHLz5WEA" // 51st video ID. Uncomment it to see the error
    ];
    var url = "https://www.googleapis.com/youtube/v3/videos?part=statistics&key=" + key + "&id=" + videoIds.join(",");
    var xmlHttp = new XMLHttpRequest();
    xmlHttp.onreadystatechange = function() {
        (xmlHttp.readyState == 4) && alert("HTTP Status code: " + xmlHttp.status);
    };
    xmlHttp.open("GET", url, true);
    xmlHttp.send(null);
}
The answer is 50. The reason is that that is all you will get back.
As some calls can have quite a few results depending on the search criteria and available results, they have capped "maxResults" at 50.
The exception to this is CommentThreads, which allow up to 100.
This is (as you can work out) to speed up page loads and call times.
EDIT:
This can be tested in the "Try this API" part of the documentation.
You will need to put 50 video IDs into the "id" field, separated by commas.
Then add one more ID to make it 51 and test again. You should receive a "400" response.
P.S. They do not need to be unique IDs. So take a few and copy and paste them as many times as needed ;-)
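If you need details for more than 50 videos, batching the IDs into groups of 50 and merging the responses works around the cap. A minimal Python sketch (fetch_stats is an illustrative helper; the requests library and a valid API key are assumed):

import requests

API_URL = 'https://www.googleapis.com/youtube/v3/videos'

def fetch_stats(video_ids, api_key):
    # Fetch statistics for any number of videos, 50 IDs per request
    items = []
    for i in range(0, len(video_ids), 50):  # the API caps the id list at 50
        batch = ','.join(video_ids[i:i + 50])
        resp = requests.get(API_URL, params={'part': 'statistics', 'key': api_key, 'id': batch})
        resp.raise_for_status()
        items.extend(resp.json().get('items', []))
    return items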

Instagram /tags/\(hashtag)/media/recent endpoint not returning pagination?

I've been trying to get this to work for probably 6 hours now to no avail, and I've read every Stack Overflow question I could find on the topic.
I'm trying to get 100, 200, or maybe 500 photos from a single tag:
func hashtags(hashtag: String, nextMaxTagId: String?) -> RequestParamters {
    var params = "/tags/\(hashtag)/media/recent|access_token=\(accessToken)"
    var parameters = Dictionary<String, AnyObject>()
    parameters["access_token"] = accessToken
    let urlString = "https://api.instagram.com/v1/tags/\(hashtag)/media/recent"
    if let nextMaxTagId = nextMaxTagId {
        params += "|max_tag_id=\(nextMaxTagId)"
        parameters["max_tag_id"] = nextMaxTagId
    }
    let sig = HMAC.signWithKey(C.InstagramClientSecret(), usingData: params)
    parameters["sig"] = sig
    return (urlString: urlString, parameters: parameters)
}
This is what I use to construct my urls and parameters for my request. My first request does not have a nextMaxTagId, and that request goes through, returns 20 images and a pagination json.
Then, when I extract the next_max_tag_id from the pagination block and create a request using that parameter, I get another 20 images, but they are the same images as before, and now I do not get a pagination block.
I am signing my requests correctly (as all my other API requests throughout the app go through no problem) and I am not in Sandbox mode.
Edit: I've also tried using min_tag_id=\(nextMinTagId), still do not receive pagination in the next request.
Seems like:
1) You are using the Instagram Developer API with what seems like an authorized API key, and you mentioned you are NOT in Sandbox, so you're in the Production environment for that API.
I'm trying to get 100, 200, or maybe 500 photos from a single tag
2) This means, combined with "returns 20 images and a pagination json", that for 100 photos you need to make at least 5 calls (100/20 == 5); for 200, 10; and for 500, 25.
3) According to the developer documentation rate limits, the overall cap on Production is 5000 req/hour, with several APIs restricted to a much smaller limit (some are 30/60 req/hour). I'm not sure I see the exact tag rate limit you are hitting, but since the question mentions:
for probably 6 hours now to no avail
it's also possible you've just been hitting the overall hourly request limit each hour.
I definitely know that this is not an answer that I enjoy giving, because it's essentially saying: you're stuck. I've actually played with the rate limits myself before, and I find them extremely limiting (pun fully intended). The only other option, albeit not as "above board", is to scrape Instagram itself for the information you need. I say it's not as "above board" because if you needed info not found on a web scrape, you could theoretically scrape the mobile API through some minor reverse engineering (ie using an HTTP proxy to spoof mobile traffic systematically).
In the end, the API Instagram publishes is definitely very limited, and will face rate limits for the foreseeable future (unless you can get those somehow lifted in a specific partnership they somehow deem worthy, although I'm not sure how this could be approached).
