Rate limit exceeded in tweepy - twitter

I am facing a rate-limit problem with tweepy: I receive a "Rate limit exceeded" error every time I run my script. Is there any way to know how many requests I can make before the error occurs?

Tweepy offers access to the Rate Limit API.
From their documentation:
import tweepy
consumer_key = 'a'
consumer_secret = 'b'
access_token = 'c'
access_token_secret = 'd'
# OAuth process, using the keys and tokens
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
# Creation of the actual interface, using authentication
api = tweepy.API(auth)
# Show the rate Limits
print(api.rate_limit_status())
You'll then see a list of all the available rate limits and how many calls you have remaining.
For example:
{
  "rate_limit_context": {"access_token": "1234"},
  "resources": {
    "account": {
      "/account/login_verification_enrollment": {
        "limit": 15,
        "remaining": 15,
        "reset": 1411295469
      },
      "/account/settings": {
        "limit": 15,
        "remaining": 15,
        "reset": 1411295469
      },
      "/account/update_profile": {
        "limit": 15,
        "remaining": 15,
        "reset": 1411295469
      },
      "/account/verify_credentials": {
        "limit": 15,
        "remaining": 15,
        "reset": 1411295469
      }
    },
    ...
  }
}
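Since the response is a plain Python dict, the remaining-call count for a given endpoint can be read programmatically. A minimal sketch (the helper name is mine, and the sample dict just mirrors the structure shown above):

```python
def remaining_calls(status, family, endpoint):
    """Return (remaining, reset) for one resource family/endpoint
    from a rate_limit_status() response dict."""
    entry = status["resources"][family][endpoint]
    return entry["remaining"], entry["reset"]

# Sample shaped like the response above:
status = {
    "rate_limit_context": {"access_token": "1234"},
    "resources": {
        "account": {
            "/account/settings": {"limit": 15, "remaining": 15, "reset": 1411295469},
        }
    },
}

print(remaining_calls(status, "account", "/account/settings"))  # (15, 1411295469)
```

Checking `remaining` before issuing a call lets a script sleep until `reset` instead of hitting the error.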

The rate limits can be found in the Twitter API documentation:
https://dev.twitter.com/docs/rate-limiting/1#rest


Error message : Google.Apis.Requests.RequestError Range (Sheet1!K1254) exceeds grid limits. Max rows: 804, max columns: 25 [400]

I started getting the error below at random. The same code works fine if I stop my service and run it again, but after a day or two it starts giving this error. I have scheduled a job to run every 15 minutes that updates the K column value based on some condition.
Quick help would be highly appreciated.
Error message : Google.Apis.Requests.RequestError Range (Sheet1!K1254) exceeds grid limits. Max rows: 804, max columns: 25 [400] Errors [ Message[Range (Sheet1!K1254) exceeds grid limits. Max rows: 804, max columns: 25] Location[ - ] Reason[badRequest] Domain[global] ]
Edit1: the code I am using to update the cell:
private static async Task UpdateValue(SpreadsheetsResource.ValuesResource valuesResource, string updatedvalue, string emailaddr)
{
    var valueRange = new ValueRange { Values = new List<IList<object>> { new List<object> { updatedvalue } } };
    var update = valuesResource.Update(valueRange, SpreadsheetId, WriteRange);
    update.ValueInputOption = SpreadsheetsResource.ValuesResource.UpdateRequest.ValueInputOptionEnum.RAW;
    var response = await update.ExecuteAsync();
    Console.WriteLine($"Updated Licenses Status for: {response.UpdatedRows} " + emailaddr);
}
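The error itself points at the cause: the write targets row 1254 while the sheet's grid only has 804 rows, so the range falls outside the grid. One way out is to grow the grid with an `appendDimension` request before writing. A hedged sketch of that request body, shown in Python for brevity (the Sheets v4 request shape, not the asker's C# code; the helper name is mine):

```python
def append_rows_request(sheet_id, rows_needed, current_rows):
    """Build a Sheets v4 appendDimension batchUpdate request that grows
    the grid to at least rows_needed rows; None if already large enough."""
    if rows_needed <= current_rows:
        return None
    return {
        "appendDimension": {
            "sheetId": sheet_id,
            "dimension": "ROWS",
            "length": rows_needed - current_rows,
        }
    }

# Writing to K1254 on a sheet with 804 rows needs 450 more rows:
print(append_rows_request(0, 1254, 804))
```

The returned dict goes into the `requests` list of a `spreadsheets().batchUpdate()` call.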

Corda - Redeem Tokens - Token SDK + Accounts

I'm trying to use the Token SDK for the first time, and I am not able to redeem the correct amount of tokens from a specific Corda account.
How I am issuing the tokens:
val accountInfo = accountService.accountInfo(accountId)
val accountKey = subFlow(RequestKeyForAccountFlow(accountInfo.state.data, initiateFlow(ourIdentity)))
val tokens = 10 of MyTokenType issuedBy ourIdentity heldBy accountKey
subFlow(IssueTokens(listOf(tokens), emptyList()))
I also implemented a report() function that queries the vault to GET the amount of tokens for each account:
fun report(accountInfo: StateAndRef<AccountInfo>): Amount<TokenType> {
    val criteria = QueryCriteria.VaultQueryCriteria(
        status = Vault.StateStatus.UNCONSUMED,
        relevancyStatus = Vault.RelevancyStatus.RELEVANT,
        externalIds = listOf(accountInfo.state.data.identifier.id)
    )
    val exprAggregate = builder {
        com.r3.corda.lib.tokens.contracts.internal.schemas.PersistentFungibleToken::amount.sum()
    }
    val aggregateCriteria = QueryCriteria.VaultCustomQueryCriteria(exprAggregate)
    val otherResult = serviceHub.vaultService.queryBy(
        criteria = criteria.and(aggregateCriteria),
        contractStateType = FungibleToken::class.java
    ).otherResults[0]
    val sum = if (otherResult == null) 0L else (otherResult as Long)
    return sum.MyToken
}
If I issue 10 tokens for a specific account, the GET that calls my report function returns this:
{
"value": {
"quantity": 10,
"displayTokenSize": 1,
"token": {
"tokenIdentifier": "MyToken",
"fractionDigits": 0,
"displayTokenSize": 1,
"customTokenType": false,
"regularTokenType": true,
"tokenClass": "com.r3.corda.lib.tokens.contracts.types.TokenType",
"pointer": false
}
},
"message": "Tokens in account."
}
This shows me that I am actually issuing the tokens correctly, since the quantity is the 10 tokens I just issued.
What I am doing in the Redeem flow:
val accountInfo = accountService.accountInfo(accountId)
val tokens: Amount<TokenType> = 10.MyToken
val heldByAccount: QueryCriteria = QueryCriteria.VaultQueryCriteria()
    .withExternalIds(Collections.singletonList(accountInfo.state.data.identifier.id))
subFlow(RedeemFungibleTokens(amount = tokens, issuer = ourIdentity, observers = emptyList(), queryCriteria = heldByAccount))
But when I do the GET to run my report() function, it just gives me this as a response:
{
"value": {
"quantity": 0,
"displayTokenSize": 1,
"token": {
"tokenIdentifier": "MyToken",
"fractionDigits": 0,
"displayTokenSize": 1,
"customTokenType": false,
"regularTokenType": true,
"tokenClass": "com.r3.corda.lib.tokens.contracts.types.TokenType",
"pointer": false
}
},
"message": "Tokens in account."
}
This shows me that my redeem flow is not working: it is not redeeming only 10 tokens but all the tokens for that account, since the quantity equals zero.
Any ideas on how I can fix this?
Thanks a lot
I believe the answer is to add a changeHolder to the RedeemFungibleTokens sub-flow you are calling, so that any change from the redemption is assigned back to the account's key rather than to the node's default identity.
Take a look at:
https://training.corda.net/libraries/accounts-exercise/

Parsing a JSON file using the Telegraf input plugin: unexpected output

I'm new to Telegraf and InfluxDB and currently looking to explore Telegraf, but unfortunately I'm having some difficulty getting started. I'll try to explain my problem below:
Objective: parse a JSON file using the Telegraf file input plugin.
Input : https://wetransfer.com/downloads/0abf7c609d000a7c9300dc20ee0f565120200624164841/ab22bf ( JSON file used )
The input JSON file is a repetition of the same structure, each record starting at params.
Below is the main part of the input file:
{
  "events": [
    {
      "params": [
        {"name": "element_type", "value": "Home_Menu"},
        {"name": "element_id", "value": ""},
        {"name": "uuid", "value": "981CD435-E6BC-01E6-4FDC-B57B5CFA9824"},
        {"name": "section_label", "value": "HOME"},
        {"name": "element_label", "value": ""}
      ],
      "libVersion": "4.2.5",
      "context": {
        "locale": "ro-RO",
        "country": "RO",
        "application_name": "spresso",
        "application_version": "2.1.8",
        "model": "iPhone11,8",
        "os_version": "13.5",
        "platform": "iOS",
        "application_lang_market": "ro_RO",
        "platform_width": "320",
        "device_type": "mobile",
        "platform_height": "480"
      },
      "date": "2020-05-31T09:38:55.087+03:00",
      "ssid": "spresso",
      "type": "MOBILEPAGELOAD",
      "user": {
        "anonymousid": "6BC6DC89-EEDA-4EB6-B6AD-A213A65941AF",
        "userid": "2398839"
      },
      "reception_date": "2020-06-01T03:02:49.613Z",
      "event_version": "v1"
    }
  ]
}
Issue: following the documentation, I tried to define a simple telegraf.conf file as below:
[[outputs.influxdb_v2]]
…
[[inputs.file]]
  files = ["/home/mouhcine/json/file.json"]
  json_name_key = "My_json"
  # Listing only some of the string fields in the JSON, for simplicity.
  json_string_fields = ["ssid", "type", "userid", "name", "value", "country", "model"]
  data_format = "json"
  json_query = "events"
Declaring the string fields in telegraf.conf should basically do it, but I couldn't get the fields nested deeper in the JSON file, for example those inside params or context.
So in the end I can parse fields at the same level of hierarchy as ssid, type, and libVersion, but not the ones inside params, context, or user.
Output : Screen2 ( attachment ).
Out of curiosity, I tried the documentation's example in order to verify whether I get the expected result there, and the answer is no: I don't get the string field nested inside the sub-object either.
The doc’s example below:
Input :
{
  "a": 5,
  "b": {
    "c": 6,
    "my_field": "description"
  },
  "my_tag_1": "foo",
  "name": "my_json"
}
telegraf.conf
[[outputs.influxdb_v2]]
…
[[inputs.file]]
  files = ["/home/mouhcine/json/influx.json"]
  json_name_key = "name"
  tag_keys = ["my_tag_1"]
  json_string_fields = ["my_field"]
  data_format = "json"
Expected Output : my_json,my_tag_1=foo a=5,b_c=6,my_field="description"
The result I get: "my_field" is missing.
Output: Screen 1 ( attachement ).
By the way, I am using InfluxDB Cloud 2. I apologize for the long description of this little problem; I would appreciate some help, please. Thank you so much in advance.
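A likely explanation: Telegraf's `json` parser flattens nested objects by joining child keys to their parent key with an underscore, which is why the expected output above shows `b_c=6`. If that is right, a string field nested under `b` ends up named `b_my_field`, so `json_string_fields = ["my_field"]` matches nothing and the field is dropped; `["b_my_field"]` (or a glob such as `["*my_field"]`) should keep it. A minimal Python illustration of that flattening (my own sketch of the behavior, not Telegraf's actual code):

```python
def flatten(obj, prefix=""):
    """Flatten nested JSON objects the way Telegraf's json parser
    appears to: child keys are joined to their parent key with '_'."""
    out = {}
    for key, value in obj.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            out.update(flatten(value, name + "_"))
        else:
            out[name] = value
    return out

# The doc's example input from above:
doc = {"a": 5, "b": {"c": 6, "my_field": "description"}, "my_tag_1": "foo"}
print(flatten(doc))
# {'a': 5, 'b_c': 6, 'b_my_field': 'description', 'my_tag_1': 'foo'}
```

The same reasoning would apply to the `params`, `context`, and `user` sub-objects in the first file: their fields need to be listed under their flattened names.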

Hide column with Sheets API call

I would like to hide a given column in a google spreadsheet through the API v4, but I struggle to do so.
Does anyone know if it's possible and has managed to do it?
Apps Script has a dedicated method to do this, and I would be surprised if the feature were not available in the REST API.
Yes, there is. It's just not very straightforward.
Here is an example of hiding column 5:
import httplib2
from googleapiclient.discovery import build  # 'apiclient' is a deprecated alias

credentials = get_credentials()  # See Google's Sheets docs
http = credentials.authorize(httplib2.Http())
service = build('sheets', 'v4', http=http)

spreadsheet_id = '#####################'
sheet_id = '#######'

requests = []
# Column indices are zero-based and endIndex is exclusive,
# so startIndex=4, endIndex=5 targets the fifth column (E).
requests.append({
    'updateDimensionProperties': {
        'range': {
            'sheetId': sheet_id,
            'dimension': 'COLUMNS',
            'startIndex': 4,
            'endIndex': 5,
        },
        'properties': {
            'hiddenByUser': True,
        },
        'fields': 'hiddenByUser',
    }})

body = {'requests': requests}
response = service.spreadsheets().batchUpdate(
    spreadsheetId=spreadsheet_id,
    body=body
).execute()
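Since the request body is plain data, it can also be produced by a small helper, which makes the index arithmetic explicit. A sketch (the helper name is mine):

```python
def hide_column_request(sheet_id, column_index, hidden=True):
    """Build an updateDimensionProperties request that hides (or unhides)
    one zero-based column on the given sheet. endIndex is exclusive."""
    return {
        "updateDimensionProperties": {
            "range": {
                "sheetId": sheet_id,
                "dimension": "COLUMNS",
                "startIndex": column_index,
                "endIndex": column_index + 1,
            },
            "properties": {"hiddenByUser": hidden},
            "fields": "hiddenByUser",
        }
    }

# Hiding the fifth column (zero-based index 4, i.e. column E):
req = hide_column_request("#######", 4)
```

Passing `hidden=False` builds the corresponding unhide request, since `fields` limits the update to `hiddenByUser` either way.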

Docker API: cpu_stats vs precpu_stats

What is the difference between cpu_stats and precpu_stats when using the Docker remote API?
The request is:
GET /containers/(id or name)/stats
(A part of) The response is:
"cpu_stats": {
  "cpu_usage": {
    "percpu_usage": [8646879, 24472255, 36438778, 30657443],
    "usage_in_usermode": 50000000,
    "total_usage": 100215355,
    "usage_in_kernelmode": 30000000
  },
  "system_cpu_usage": 739306590000000,
  "throttling_data": {"periods": 0, "throttled_periods": 0, "throttled_time": 0}
},
"precpu_stats": {
  "cpu_usage": {
    "percpu_usage": [8646879, 24350896, 36438778, 30657443],
    "usage_in_usermode": 50000000,
    "total_usage": 100093996,
    "usage_in_kernelmode": 30000000
  },
  "system_cpu_usage": 9492140000000,
  "throttling_data": {"periods": 0, "throttled_periods": 0, "throttled_time": 0}
}
Example taken from Docker docs:
https://docs.docker.com/engine/reference/api/docker_remote_api_v1.24/#get-container-stats-based-on-resource-usage
When testing with a sample container, the values are almost the same.
Example of an output:
# cpu_stats
{u'cpu_usage': {u'usage_in_usermode': 0, u'total_usage': 36569630, u'percpu_usage': [8618616, 3086454, 16466404, 8398156], u'usage_in_kernelmode': 20000000}, u'system_cpu_usage': 339324470000000, u'throttling_data': {u'throttled_time': 0, u'periods': 0, u'throttled_periods': 0}}
# precpu_stats
{u'cpu_usage': {u'usage_in_usermode': 0, u'total_usage': 36569630, u'percpu_usage': [8618616, 3086454, 16466404, 8398156], u'usage_in_kernelmode': 20000000}, u'system_cpu_usage': 339320550000000, u'throttling_data': {u'throttled_time': 0, u'periods': 0, u'throttled_periods': 0}}
I also tried to compare a specific metric (system_cpu_usage) in the two cases for 4 containers:
#First container
359727340000000 #CPU Stats
359723390000000 #Per CPU Stats
#2
359735220000000
359731290000000
#3
359743100000000
359739170000000
#4
359750940000000
359747000000000
The values above are almost the same (there are some differences, but not huge ones, perhaps because a few milliseconds pass between the two requests).
In the official documentation:
The precpu_stats is the cpu statistic of last read, which is used for
calculating the cpu usage percent. It is not the exact copy of the
“cpu_stats” field.
This is not very clear to me.
Could anyone explain it better?
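The point of `precpu_stats` is that a CPU usage *percentage* needs two samples: it holds the previous read, so one response contains both deltas. This is essentially how the `docker stats` CLI computes its CPU column; a sketch of that calculation (simplified, using the length of `percpu_usage` as the CPU count):

```python
def cpu_percent(cpu_stats, precpu_stats):
    """Compute CPU usage percent from two consecutive samples:
    container usage delta over system usage delta, scaled by CPU count."""
    cpu_delta = (cpu_stats["cpu_usage"]["total_usage"]
                 - precpu_stats["cpu_usage"]["total_usage"])
    system_delta = (cpu_stats["system_cpu_usage"]
                    - precpu_stats["system_cpu_usage"])
    ncpus = len(cpu_stats["cpu_usage"]["percpu_usage"])
    if system_delta <= 0 or cpu_delta < 0:
        return 0.0
    return cpu_delta / system_delta * ncpus * 100.0

# Using the two samples from the docs' response above:
cpu = {"cpu_usage": {"total_usage": 100215355,
                     "percpu_usage": [8646879, 24472255, 36438778, 30657443]},
       "system_cpu_usage": 739306590000000}
pre = {"cpu_usage": {"total_usage": 100093996,
                     "percpu_usage": [8646879, 24350896, 36438778, 30657443]},
       "system_cpu_usage": 9492140000000}
print(cpu_percent(cpu, pre))
```

This also explains the observation above: when the two requests are close together, `cpu_stats` and `precpu_stats` hold nearly identical counters, and only their difference matters.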
