LinkedIn API: I'm getting "Access to people search denied." - oauth

I'm using the LinkedIn API to search for people information, and I keep hitting the per-user limits.
(Even though I only want public info, it seems it is mandatory to send requests with a logged-in user's token.)
So I registered some people to my app to get more tokens, but when I use those tokens I'm getting:
"Access to people search denied."
What could be the cause?

Not enough info is provided to solve the problem, but LinkedIn has clear documentation on its throttle limits. It's never been a problem for me, so I suspect you may be making unnecessary API calls. The data the API returns is very deeply nested, and you may not need as many calls as you think to get what you need. I would print a single API response to the terminal and inspect it. I use Python and will usually have to dig 3 or 4 levels to reach the data I need. For example:
updates = app.get_company_updates(COMPANY_ID, params={'count': count, 'event-type': 'status-update'})
update_list = []
for update in updates['values']:
    share = update['updateContent']['companyStatusUpdate']['share']
    my_dict = {'comment': share['comment']}
    if share.get('content'):
        my_dict['description'] = share['content'].get('description')
        my_dict['submittedImageUrl'] = share['content'].get('submittedImageUrl')
        my_dict['title'] = share['content'].get('title')
        my_dict['shortenedUrl'] = share['content'].get('shortenedUrl')
    update_list.append(my_dict)
But if you really need to hit the API a lot, then you might want to set up a bunch of scripts to run as daily cron jobs, each using a unique developer login (a sketch of this is below).
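As a rough sketch of that setup (everything here is illustrative: the script path, the LINKEDIN_TOKEN variable, and the make_client helper are placeholders, not part of the original answer), each cron entry could export a different token to the same script:

import os

# Hypothetical crontab entries, one per developer login:
#   0 6 * * * LINKEDIN_TOKEN=token_for_login_1 /usr/bin/python3 /opt/scripts/pull_updates.py
#   0 7 * * * LINKEDIN_TOKEN=token_for_login_2 /usr/bin/python3 /opt/scripts/pull_updates.py

token = os.environ["LINKEDIN_TOKEN"]   # each job supplies its own credentials
app = make_client(token)               # placeholder: build the client however your LinkedIn library expects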

Related

Get all TI Indicators returns an empty list

I am trying to collect all active TIs via the Beta Graph API by following this. But it doesn't return anything. Here is what I use in Postman:
https://graph.microsoft.com/beta/security/tiIndicators
Response (200):
{
    "@odata.context": "https://graph.microsoft.com/beta/$metadata#security/tiIndicators",
    "value": []
}
A bit of context for the environment I work in.
The tenant has multiple Sentinel workspaces & resource groups.
The application I use has the correct permissions:
ThreatIndicators.Read.All
ThreatIndicators.ReadWrite.OwnedBy
ThreatSubmission.Read.All
ThreatSubmission.ReadWrite.All
It is my current belief that this might be due to limitations of the beta API. My reasoning is that, according to this documentation, you need the ThreatIndicators.ReadWrite.OwnedBy permission to access the API. This would suggest that currently you can only view TIs that the resource itself created.
If more info is needed just ask.
According to the documentation, the ThreatIndicators.ReadWrite.OwnedBy permission allows you to manage threat indicators that your app creates or owns.
If you want to read all the threat indicators for your organization, your app needs the ThreatIndicators.Read.All permission.
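As a minimal sketch of what a working call could look like with an app-only (client credentials) token that has ThreatIndicators.Read.All granted (the tenant and app values below are placeholders, and msal/requests is just one way to do it):

import msal
import requests

# Placeholders - substitute your own tenant and app registration values
TENANT_ID = "<tenant-id>"
CLIENT_ID = "<app-client-id>"
CLIENT_SECRET = "<app-client-secret>"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
# .default picks up whatever application permissions were granted (admin consent required)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

resp = requests.get(
    "https://graph.microsoft.com/beta/security/tiIndicators",
    headers={"Authorization": "Bearer " + token["access_token"]},
)
resp.raise_for_status()
print(resp.json().get("value", []))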
Although this is not a solution to the question, it is a workaround: by using the Log Analytics API you can get the TIs via a KQL query.
ThreatIntelligenceIndicator
| where ExpirationDateTime > now() and
    NetworkIP matches regex @"^(?:(?:25[0-5]|(?:2[0-4]|1\d|[1-9]|)\d)\.?\b){4}$" and
    ConfidenceScore > 25
| summarize by NetworkIP
This is probably better, as you can also use a watchlist to exclude specific IP addresses in the same request.
One thing I struggled with here was authorization: you must give your application permission to use the api.loganalytics.io API, and the application needs the Log Analytics Reader role on the Log Analytics workspace you want to query.
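For illustration, here is a rough sketch of that workaround against the Log Analytics REST API (assuming the permission and role described above are already in place; the tenant, app, and workspace IDs are placeholders):

import requests
from azure.identity import ClientSecretCredential

# Placeholders - substitute your own values
credential = ClientSecretCredential(
    tenant_id="<tenant-id>",
    client_id="<app-client-id>",
    client_secret="<app-client-secret>",
)
token = credential.get_token("https://api.loganalytics.io/.default").token

kql = r"""
ThreatIntelligenceIndicator
| where ExpirationDateTime > now() and
    NetworkIP matches regex @"^(?:(?:25[0-5]|(?:2[0-4]|1\d|[1-9]|)\d)\.?\b){4}$" and
    ConfidenceScore > 25
| summarize by NetworkIP
"""

resp = requests.post(
    "https://api.loganalytics.io/v1/workspaces/<workspace-id>/query",
    headers={"Authorization": "Bearer " + token},
    json={"query": kql},
)
resp.raise_for_status()
# The response comes back as tables of columns and rows
for row in resp.json()["tables"][0]["rows"]:
    print(row[0])  # NetworkIP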

Use Tweepy to extract Twitter follower information (API incompatibility issues)?

I was following this tutorial, https://towardsdatascience.com/how-to-download-twitter-friends-or-followers-for-free-b9d5ac23812, which was written in 2021. It should have worked fine, but they had to 'fix' the things that just work.
Specifically, running this line
for fid in Cursor(api.followers_ids, screen_name=screen_name, count=5000).items():
    ids.append(fid)
gives the error:
"tweepy.error.TweepError: [{'message': 'You currently have Essential access which includes access to Twitter API v2 endpoints only. If you need access to this endpoint, you’ll need to apply for Elevated access via the Developer Portal. You can learn more here: https://developer.twitter.com/en/docs/twitter-api/getting-started/about-twitter-api#v2-access-leve', 'code': 453}]"
I could have pulled the data in five minutes. Now debugging this has already cost more than an hour, because they keep breaking things that work. Is there any way to make this old code snippet work? The application for API v1.1 access takes weeks, and I don't have time to go through their poor documentation on migrating from API v1.1 to v2.0, and then the documentation for migrating from Tweepy 3.9.0 to 4.0.0. A five-minute task would turn into half a day. Thanks in advance for any help.
First of all, have you at least tried to apply for the Elevated access?
It can take some time, it's true, but it can also be instantaneous.
The other solution would be to use the Twitter API V2.
You don't need any tutorial, just read the documentation:
Here for the authentication;
Here for the retrieval of the followers;
Here for the pagination.
And you should get something like that:
import tweepy

client = tweepy.Client("Bearer Token here")

paginator = tweepy.Paginator(
    client.get_users_followers,
    id=...,  # ID only, no screen name
    max_results=1000
).flatten()

for follower in paginator:
    print(follower.id)
Finally, even if I understand your frustration (and developing Twitter applications can be very frustrating), I think you should try to keep it out of your SO questions. Good luck!

Is there any Way to Not Get our App Blocked by Twitter

I have coded a bot which tweets every 150 seconds (time.sleep(150)). I created an app on Twitter with read/write permissions, but after 30 tweets Twitter blocks the application. Is there any way around this? Has anyone else run bots on Twitter? There are grammar bots and RT bots on Twitter with almost 110k tweets that tweet every 30 seconds. How do they get past the rate limit protection?
The specific error is "Restricted from performing write actions", and the code stops.
The docs at least don't specify the rate limit rules, only that there is one. But they do state that you cannot post duplicate texts. Could that be the case here? What HTTP error are you receiving? Since they don't explicitly post the rules that apply to the rate limit, I'd suppose they have internal algorithms to tell whether it's bot-like behaviour, which I'm sure they do not want to allow. Especially if you set up a new app, regulation might be stricter.
Edit:
If you're completely certain that you're not firing off too many requests at once (e.g. check with Fiddler to make sure), then Twitter suggests getting in touch and checking the email address of the associated account for any mail from the operations team, in order to resolve possible misinterpretations.
This also might be useful: API developers: abuse prevention and security
I faced the exact same problem with my Twitter bot. I basically made a bot to auto-reply to a specific user and ran the script on a one-minute cron. It got blocked even though my replies weren't spam but AI-generated replies from https://qnamaker.ai/
My script is a bit heavy, so it is not feasible for me to keep it running 24/7 as it would create overhead on the server. But since you keep yours running (I picked that up from the delay function), one thing you can do is keep changing the interval at which your bot tweets. Make it random so that it doesn't look scheduled and bot-like.
You can use the random module in Python for that.
import random
import time

y = random.randint(100, 150)
time.sleep(y)
This way your code resumes after random intervals of time. This prevented my bot from getting blocked for a long time. Hope this helps.
Your bot seems to be banned; their algorithms probably flagged your posting as similar to that of spam accounts.
You should contact Twitter and explain the situation using the following page:
Twitter API Policy Support. Give them the details of why they should allow your bot to act as you wish.

How do I get revenue reports from a YouTube CMS account using an API (for an MCN)?

I have access to a YouTube CMS account (for an MCN). On YouTube I can do lots and lots of things with it and this also includes downloading CSV reports which contain detailed information about earnings.
However I want to do some automatic processing of that data and thus access the data using an API instead of a manual CSV download. It looks like the YouTube Analytics Content Owner Reports should contain these data as well, thus I tried to get some data from this API (for now only using the API Explorer) but the only thing I was able to get was a "Forbidden" response.
The API Explorer tells me that for a CMS account I need to specify contentOwner==OWNER_NAME but there is nowhere an explanation what that OWNER_NAME would be. I tried to just insert the displayed name of my CMS account, replacing spaces with underscores, but no success. How do I find out what my owner name is?
Additionally, when I authenticate using OAuth I receive as usual the list of accounts where I can choose which one to use (e.g. all the YouTube channels I am a manager of), but the CMS account is not listed. However if I go to YouTube I can click on the top right corner and then switch to the CMS. No idea if that is important...
Then again, maybe I am totally on the wrong track, because I want to get the reports for all channels connected to my MCN but that does not mean that I own the content. So maybe I am no content owner? In this case: Which is the correct way to request the reports from the API?
First of all, the CMS account is not a separate account you can log in to via OAuth. It is more like a privilege, and it is connected to one of your Google/YouTube accounts. This is in contrast to YouTube's regular channel management, where each channel has its own login credentials.
I attached a screenshot of my YouTube account-selector view, where the CMS belongs to the account name@email.com, which is also the account you have to use for OAuth authorization to access your CMS reports.
Furthermore, you can see the name of the CMS, in this case "CMSName". Generally, this is the name you would use for contentOwner==CMSName. However, your CMS name seems to include whitespace. Unfortunately, I cannot reproduce this case because of missing admin rights, but I would suggest trying _ for the whitespace here too, because " " and "%20" do not match the regular expression for valid params.
But you said that you had no success trying that. There are two error scenarios:
403 Forbidden: the name of the CMS could be wrong, or the selected OAuth account does not have the required privileges. Do you have all required scopes, and did you select the correct account?
400 Bad Request: this happens when the request is invalid per se. If you use contentOwner==CMSName as the ids param, a filter parameter is always required, e.g. channel==[ChannelIdForWhichIHaveCMSRights]. So an API request that should generally work would look like this: https://www.googleapis.com/youtube/analytics/v1/reports?ids=contentOwner%3D%3D[CONTENTOWNER_ID]&start-date=2015-01-01&end-date=2015-01-15&metrics=views&filters=channel%3D%3D[CHANNEL_ID_WITH_CMS_RIGHTS]&access_token=[OAUTH_TOKEN_FOR_RIGHT_ACCOUNT]
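For illustration, the same request as a rough Python sketch (the bracketed values are the placeholders from the URL above; sending the token as an Authorization header instead of the access_token parameter is my own choice):

import requests

# Same placeholders as in the URL above
CONTENT_OWNER_ID = "[CONTENTOWNER_ID]"
CHANNEL_ID = "[CHANNEL_ID_WITH_CMS_RIGHTS]"
ACCESS_TOKEN = "[OAUTH_TOKEN_FOR_RIGHT_ACCOUNT]"

resp = requests.get(
    "https://www.googleapis.com/youtube/analytics/v1/reports",
    params={
        "ids": "contentOwner==" + CONTENT_OWNER_ID,
        "start-date": "2015-01-01",
        "end-date": "2015-01-15",
        "metrics": "views",
        "filters": "channel==" + CHANNEL_ID,  # a filter is required when ids is a contentOwner
    },
    headers={"Authorization": "Bearer " + ACCESS_TOKEN},
)
print(resp.status_code, resp.json())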
If both cases don't work for you and you're still getting 403 errors, let's do some debugging and try to fetch the content owner ID. I will now introduce the YouTube Content ID API: https://developers.google.com/youtube/partner/.
A few words in advance: you have to activate the API in your developer console, like any other API you want to use in your app. BUT:
Note: The YouTube Content ID API is intended for use by YouTube content partners and is not accessible to all developers or to all YouTube users. If you do not see the YouTube Content ID API as one of the services listed in the Google Developers Console, see www.youtube.com/partner to learn more about the YouTube Partner Program.
You don't see it in the list of available APIs unless your account is connected to a CMS and some time has passed: it takes 7-14 days until the Content ID API becomes available for your account. This is information I got from support, though they told me it is an automated step.
So, now let's assume that you already have access to the Content ID API.
You can fetch a list of content ownerships that belong to an account. You can use the API Explorer (https://developers.google.com/youtube/partner/docs/v1/contentOwners/list#try-it); just pass fetchMine=true as a param and authorize with the https://www.googleapis.com/auth/youtubepartner-content-owner-readonly scope. The response looks like this:
{
  "kind": "youtubePartner#contentOwnerList",
  "items": [
    {
      "kind": "youtubePartner#contentOwner",
      "id": "[CMS_ID]",
      "displayName": "[DisplayName]",
      "primaryNotificationEmails": [
        "mail@random.xx"
      ],
      "conflictNotificationEmail": "mail@random.xx",
      "disputeNotificationEmails": [
        "mail@random.xx"
      ],
      "fingerprintReportNotificationEmails": [
        "mail@random.xx"
      ]
    }
  ]
}
This is where you get your [CMS_ID] from; you can also use it in any API request as onBehalfOfContentOwner.
To get a list of all channels that belong to the ownership, simply make this request:
https://www.googleapis.com/youtube/v3/channels?part=contentDetails&managedByMe=true&onBehalfOfContentOwner=[CONTENTOWNER]&access_token=[ACCESS_TOKEN]
Note that this request requires the https://www.googleapis.com/auth/youtubepartner scope to be granted.
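As a rough sketch of those two calls with plain HTTP requests (assuming an OAuth token for the CMS-connected account with the scopes mentioned above; the contentOwners URL is my reconstruction of the API Explorer call, so treat it as an assumption):

import requests

ACCESS_TOKEN = "[ACCESS_TOKEN]"  # OAuth token for the CMS-connected account

# 1) Fetch the content ownerships connected to the authorized account
owners = requests.get(
    "https://www.googleapis.com/youtube/partner/v1/contentOwners",
    params={"fetchMine": "true"},
    headers={"Authorization": "Bearer " + ACCESS_TOKEN},
).json()
cms_id = owners["items"][0]["id"]  # the [CMS_ID] from the response above

# 2) List all channels managed on behalf of that content owner
channels = requests.get(
    "https://www.googleapis.com/youtube/v3/channels",
    params={
        "part": "contentDetails",
        "managedByMe": "true",
        "onBehalfOfContentOwner": cms_id,
    },
    headers={"Authorization": "Bearer " + ACCESS_TOKEN},
).json()
print(channels)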
Hope this helps; feel free to ask further questions.

getting random 404 errors using Valence

When I make API calls to the server, I'm getting 404 errors for various data -- grades, role IDs, terms -- that I don't get the next time I make the same call. The data is there on the server, viewable by the same user, and is often returned successfully, but not every time. The same user context will return data successfully for other calls.
Any ideas what could be causing this?
I'm using the Valence API with the Python client library and our 9.4.1 SP18 instance of Desire2Learn in a non-interactive script.
More detail: the text returned with the bad 404s is "Error: The system cannot find the path specified."
It would help enormously to gather data about your case: packet traces showing successful calls from your client alongside unsuccessful ones would be particularly useful. If you are quite certain (and from your description I see no reason you shouldn't be) that you're forming the calls the same way each time, then the behaviour you're noticing points to some wider network or configuration issue: sometimes your calls properly get through the web service layer, and sometimes they don't. That would suggest the problem is not in how you're using the API but in how the service is able to receive the request.
I would encourage you, especially if you can gather data showing this behaviour, to open a support incident with Desire2Learn's help desk in conjunction with your Approved Support Contact or your Partner Manager (depending on whether you're a D2L client or a D2L partner).
