Twitter stream APIs: allotted requests for search stream APIs v2

I'm new to the Twitter APIs (this is my first experience with them), and I'm playing with them to monitor an account for new tweets, opening a web page when one arrives, but I'm having some trouble understanding how the request allotment works.
Not knowing much, the Twitter stream v2 APIs seem to be the ones fitting my use case, and in the Twitter-API-v2-sample-code git repository there is also a very clear filtered stream Node.js example. In fairness, I had little hassle implementing everything, and my code is not much different from the filtered_stream.js source code. Given the provided example, the implementation is straightforward: I use https://api.twitter.com/2/tweets/search/stream/rules to set up my rules (an array like [ { 'value': 'from:<myAccount>' } ]), and then I start listening at https://api.twitter.com/2/tweets/search/stream, easy peasy.
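To give an idea of the flow, here is a minimal sketch of those two calls (not my actual code; it assumes Node 18+ with the global fetch API, the bearer token in a BEARER_TOKEN environment variable, and a placeholder account name in the rule):

const TOKEN = process.env.BEARER_TOKEN;
const RULES_URL = 'https://api.twitter.com/2/tweets/search/stream/rules';
const STREAM_URL = 'https://api.twitter.com/2/tweets/search/stream';

async function setRules() {
  // One request against the rules endpoint: add a single "from:" rule.
  const res = await fetch(RULES_URL, {
    method: 'POST',
    headers: { 'Authorization': `Bearer ${TOKEN}`, 'Content-Type': 'application/json' },
    body: JSON.stringify({ add: [{ value: 'from:myAccount' }] }),
  });
  console.log('rules status:', res.status, await res.json());
}

async function listen() {
  // A second request against the stream endpoint: this opens the long-lived connection.
  const res = await fetch(STREAM_URL, { headers: { 'Authorization': `Bearer ${TOKEN}` } });
  console.log('stream status:', res.status);
  const decoder = new TextDecoder();
  for await (const chunk of res.body) {
    const line = decoder.decode(chunk).trim();
    if (line) console.log('received:', line); // keep-alive heartbeats arrive as empty lines
  }
}

setRules().then(listen).catch(console.error);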
What I don't understand is the resource allotment count, because as per the Twitter documentation I should be able to make 50 requests every 15 minutes, but I can barely make a couple, so every time I test my script I have to wait a couple of minutes before restarting.
These are the relevant headers I received after restarting a script that had been running for an hour (the status code at restart was 429):
x-rate-limit-limit: 50
x-rate-limit-remaining: 49
Reset time: +15 minutes from current time
I usually don't have to wait 15 minutes, just a couple is usually fine... And my other note is that I managed to get down to 45 x-rate-limit-remaining once or twice, but never lower than that (usually I'm locked out at 47 / 48).
What I don't understand is: I opened one stream, I closed that one stream, and now I'm locked out for a couple of minutes. Shouldn't I be able to open up to 50 connections in 15 minutes (which is actually plenty if I'm just debugging a portion of code)? Even the headers say that I have 49 attempts remaining out of 50; the status code 429 seems to be in plain contradiction with the x-rate-limit headers... Sometimes I cannot even reset the rules and start the stream in the same run, because the stream returns a backoff (status 429) when the rules resetting finishes (get -> set -> delete)...
I could add my code, but it's literally the Node.js example I cited above, and my problem is not about querying the APIs, but rather about not being able to connect, for no apparent reason (at least to me). The only thing I can think of is the fact that I use the same Bearer token for all requests (as per their example), but I don't see it written anywhere that this is a problem (I generated it in the developer dashboard; I'm not sure there is an API for that as well).
Edit - adding details
Just to describe my issue, this is the output I get when I start the script the first time:
Headers for stream received (status 200):
- [x-rate-limit-limit]: 50
- [x-rate-limit-remaining]: 49
- [x-rate-limit-reset]: 20/03/2021, 11:05:35
Which makes sense: I made one request, and the remaining count went down by one.
Now, I stopped it and ran it again immediately after (Ctrl + C, run again, let's say a two-second delay), and this is the new output:
Headers for stream received (status 429):
- [x-rate-limit-limit]: 50
- [x-rate-limit-remaining]: 48
- [x-rate-limit-reset]: 20/03/2021, 11:05:35
With the following exception being returned in the body:
{
  title: 'ConnectionException',
  detail: 'This stream is currently at the maximum allowed connection limit.',
  connection_issue: 'TooManyConnections',
  type: 'https://api.twitter.com/2/problems/streaming-connection'
}
I understand the server takes a bit to realise I disconnected, but don't I have 50 connections available in a 15-minute timeframe? I only opened one connection.
Actually, after the time it took to write all of the above (let's say ten minutes), I was able to connect again, receiving this output:
Headers for stream received (status 200):
- [x-rate-limit-limit]: 50
- [x-rate-limit-remaining]: 47
- [x-rate-limit-reset]: 20/03/2021, 11:05:35
Maybe I'm only realising this now and I wrote a useless question, but can I only have one active connection, which I can close and open again up to 50 times in 15 minutes? I understood I could have 50 active connections, but maybe at this point I'm wrong (and the Twitter server indeed takes a few minutes to realise I disconnected).
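If that is indeed the model (one live connection, up to 50 connection attempts per 15-minute window), then for debugging the only option seems to be waiting and retrying whenever a 429 comes back. A minimal sketch of that, where openStream() is a hypothetical helper that performs the GET on /2/tweets/search/stream and resolves with the HTTP status, and where the wait intervals are my own guesses rather than documented values:

async function connectWithRetry(openStream, maxAttempts = 10) {
  let waitMs = 30000; // start with 30 s between attempts (arbitrary choice)
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const status = await openStream();
    if (status !== 429) return status; // connected, or a different error worth surfacing
    console.log(`got 429, waiting ${waitMs / 1000} s before retrying...`);
    await new Promise((resolve) => setTimeout(resolve, waitMs));
    waitMs = Math.min(waitMs * 2, 5 * 60 * 1000); // back off, capped at 5 minutes
  }
  throw new Error('could not (re)connect to the stream');
}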

Related

How to check whether the Appium driver is live

I have a scenario where, after I disable a button, I check for data persistence in the database. It takes some time to persist data in the database (roughly 3 minutes). My tests are started through Sauce Labs, so after 90 seconds they time out and my session is closed.
I take screenshots of the tests in the tearDown method. When data persistence takes more than 90 seconds, the screenshot method fails. I want to take screenshots only when the driver is alive; how can I check for that?
takeAllureScreenShot();
}
You can increase how long Sauce Labs waits before shutting down a session by configuring the idleTimeout desired capability (documented in the Sauce Labs test configuration options).
By default, this is set to 90 seconds; it sounds like you should increase it to something like 200 seconds.
Assuming you're using Java and a Selenium 3 session with vendor name-spacing, you could do that like so:
// This is a new capabilities object to hold the nested vendor-specific options
MutableCapabilities sauceOptions = new MutableCapabilities();
sauceOptions.setCapability("idleTimeout", 200);
// assuming your desired capabilities are called 'capabilities'
capabilities.setCapability("sauce:options", sauceOptions);
(If you just wanted to check that the session was still alive, you could do so by doing something "trivial" like checking the page title inside a try/catch block. If an exception is thrown, the session is over! If you get a response, it isn't.)
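For completeness, here is a rough sketch of that "trivial call inside a try/catch" check, sticking with the Java setup assumed above (the class and helper names are made up):

import org.openqa.selenium.WebDriverException;
import org.openqa.selenium.remote.RemoteWebDriver;

public class SessionCheck {
    // Returns true if the remote session still answers a trivial command.
    public static boolean isSessionAlive(RemoteWebDriver driver) {
        try {
            driver.getTitle(); // any cheap round-trip to the remote session will do
            return true;
        } catch (WebDriverException e) {
            return false; // the session was closed or timed out
        }
    }
}

You could then call it from your tearDown method and only run takeAllureScreenShot() when it returns true.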

Serverless SQS consumer skips messages

I am using the Serverless Framework to consume messages from SQS. Some of the messages sent to the queue do not get consumed; they go straight to the in-flight SQS status and from there to my dead-letter queue. When I look at the consumer's log, I can see that it consumed and successfully processed 9/10 messages. One is always not consumed and ends up in the dead-letter queue. I am setting reservedConcurrency to 1 so that only one consumer can run at a time. The consumer function timeout is set to 30 seconds. This is the consumer code:
module.exports.mySQSConsumer = async (event, context) => {
  context.callbackWaitsForEmptyEventLoop = false;
  console.log(event.Records);
  await new Promise((res, rej) => {
    setTimeout(() => {
      res();
    }, 100);
  });
  console.log('DONE');
  return true;
}
The consumer function configuration follows:
functions:
  mySQSConsumer:
    handler: handler.mySQSConsumer
    timeout: 30 # seconds
    reservedConcurrency: 1
    events:
      - sqs:
          arn: arn:aws:sqs:us-east-1:xyz:my-test-queue
          batchSize: 1
          enabled: true
If I remove the await, it will process all messages. If I increase the timeout to 200 ms, even more messages go straight to the in-flight status and from there to the dead-letter queue. This code is very simple. Any idea why it's skipping some messages? The messages that don't get consumed don't even show up in the log from the first console.log() statement; they seem entirely ignored.
I figured out the problem. The SQS-to-Lambda event triggering works differently than I thought: the messages get pushed into the Lambda function, not pulled by it. I think this could be engineered better by AWS, but it is what it is.
The issue was the Default Visibility Timeout set to 30 seconds together with Reserved Concurrency set to 1. When the SQS queue gets filled up quickly with thousands of records, AWS starts pushing the messages to the Lambda function at a rate that is faster than the rate at which the single function instance can process them. AWS "assumes" that it can simply spin up more instances of the Lambda to keep up with the backpressure. However, the concurrency limit doesn't let it spin up more instances - the Lambda function is throttled. As a result, the function starts returning failure to the AWS backend for some messages, which will, consequently, hide the failed messages for 30 seconds (the default setting) and put them back into the queue after this period for reprocessing. Since there are so many records to process by the single instance, 30 seconds later the Lambda function is still busy and can't process those messages again. So the situation repeats itself and the messages go back to invisibility for 30 seconds. This repeats a total of 3 times. After the third attempt, the messages go to the dead-letter queue (we configured our SQS queue that way).
To resolve this issue, we increased the Default Visibility Timeout to 5 minutes. That's enough time for the Lambda function to process through most of the messages in the queue while the failed ones wait in invisibility. After 5 minutes, they get pushed back into the queue and since the Lambda function is no longer busy, it will process most of them. Some of them have to go to invisibility twice before being successfully processed.
So the remedy to this problem is either increasing the Default Visibility Timeout like we did, or increasing the number of failures necessary before a message goes to the dead-letter queue.
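If the queue is managed from the same serverless.yml, those two knobs would look roughly like this (a sketch only; the logical resource name, queue names and dead-letter queue ARN are illustrative):

resources:
  Resources:
    MyTestQueue:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: my-test-queue
        VisibilityTimeout: 300  # 5 minutes, as described above
        RedrivePolicy:
          deadLetterTargetArn: arn:aws:sqs:us-east-1:xyz:my-test-queue-dlq
          maxReceiveCount: 5  # allow more attempts before a message goes to the DLQ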
I hope this helps someone.

Sim800l AT+COPS returns 0 and AT+CREG returns 0,3

Yeah, I know there are similar questions in this community, but they didn't help.
I've been playing with the SIM800L for a few days. Its response to my AT commands is good, but when I want to send an SMS I get a problem. I think this screenshot tells most of the story:
AT commands and response
https://ibb.co/bXxwFQ
You can see that I have signal (AT+CSQ = 19), but my module can't find and connect to an operator (I forgot to test AT+CREG in the screenshot, but it returns 0,3).
I can set CREG to 1,3 with the AT+CREG=1 command. Does that help?
Lastly, I should say that I'm using an LM2596 for the power supply, and my module blinks about 70 times a minute: more than once a second (searching for network) and less than twice a second (connected).
Any help would be appreciated.
Maybe you're not powering it with an adequate supply. Your module does not automatically register to the network. It happened to me when I powered my SIM800L from the Arduino 5V pin: it worked smoothly at first but kept resetting after a while. Try AT commands such as CBAND, COPS, CREG, and CSCA to manually register to the network.
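For reference, a manual registration attempt with those commands typically looks something like this (standard SIM800 AT commands; the service centre number is a placeholder you must replace with your carrier's):

AT+CBAND?
AT+COPS=0
AT+CREG?
AT+CSCA="+xxxxxxxxxxx"
AT+CMGF=1

AT+CBAND? checks the configured frequency band, AT+COPS=0 selects an operator automatically, AT+CREG? should eventually answer 0,1 or 0,5 once registered, AT+CSCA sets the SMS service centre, and AT+CMGF=1 switches to SMS text mode. It is also worth double-checking the LM2596 supply: the SIM800L draws short current bursts of up to about 2 A, and a supply that can't deliver them tends to show up as exactly this kind of registration failure and random resets.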

Can a Google App Engine App get blocked from accessing the Google Docs API

I have implemented a Google App Engine application which uploads documents to specific folders in Google Docs. A month ago I started having response time issues (deadline exceeded on GdataClient.GetDocList, the fetch-url call, in the Gdata client) when querying for a specific folder in Google Docs. This caused a lot of tasks to queue up in the Task Queue.
When I saw this, I paused the queues for a while - about 24 hours. When I restarted the queue, nearly all of them were uploaded again, except 10 of the files / tasks.
When I implemented the GetDocList call, I implemented a retry / sleep functionality to avoid the sometimes intermittent "DeadlineExceeded" which I got during my .GetNextLink().href loop. I know that this is not a good "Cloud" design, but I was forced to do this to get it stable enough for production. For every retry I extend the wait time, and I only retry 5 times. The last time I wait for about 25 seconds before retrying.
What I think is that all the tasks in the queues retried so many times (even though I have limited the tasks to running in serial mode, one at a time, maximum 5 a minute) that the App Engine app was black-listed from the Google Docs API.
Can this happen?
What do I need to do to be able to query Google Docs Api from the same App Engine instance again?
Do I need to migrate the App Engine app to a new Application ID?
When I try this from my development environment, the code works: it queries the folder structure and returns a result within the time limit.
The folder structure I'm querying is rather big, which means that I need to fetch it in chunks via .GetNextLink().href. In my development environment, the folder structure contains far fewer folders.
Anyway, this had been working very well for about a year in the production App Engine instance, but stopped working around the 4th - 5th of March.
The user-account which is queried is currently using 7000 MB (3%) of the available 205824 MB.
When I use the code from the dev environment, but with a completely different Google Apps domain / app-id / Google account, I cannot reproduce the error.
When I changed max-results to 1 (instead of 100, 50 or 20), I succeeded intermittently. But as max-results is 1, I need to query many thousands of times, and since I only succeed with at most 3 in a row before my exponential back-off quits, I never get my whole result set. The result set (the folder I query) consists of between 300 and 400 folders, each of which in turn contains at least 2 - 6 subfolders with PDF files in them.
I have tried with max-results 2, and then the fetch fails on every occasion. If I change back to max-results 1, it succeeds on one or two fetches in a row, but this is not sufficient, since I need the whole folder structure to be able to find the correct folder to store the file in.
I have tried this from my local environment - i.e. from a completely different IP address - and it still fails. This means that the App Engine app is not blocked from accessing Google Docs. The max-results change from 2 to 1 also proves that.
Conclusion:
The slow return time from the Google Docs API must be due to the extensive number of files and collections inside the collection which I'm looping through. Keep in mind that this collection contains about 3500 MB. Is this an issue?
Log:
DocListUrl to get entries from = https://docs.google.com/feeds/default/private/full/folder:XXXXXXX/contents?max-results=1.
Retrying RetryGetDocList, wait for 1 seconds.
Retrying RetryGetDocList, wait for 1 seconds.
Retrying RetryGetDocList, wait for 4 seconds.
Retrying RetryGetDocList, wait for 9 seconds.
Retrying RetryGetDocList, wait for 16 seconds.
Retrying RetryGetDocList, wait for 25 seconds.
ApplicationError: 5
Traceback (most recent call last):
File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/_webapp25.py", line 703, in call
handler.post(*groups)
File "/base/data/home/apps/XXXX/prod-43.358023265943651014/DocsHandler.py", line 418, in post
success = uploader.Upload(blob_reader, fileToUpload.uploadSize, fileToUpload.MainFolder, fileToUpload.ruleTypeReadableId ,fileToUpload.rootFolderId,fileToUpload.salesforceLink,fileToUpload.rootFolder, fileToUpload.type_folder_name, fileToUpload.file_name, currentUser, client, logObj)
File "/base/data/home/apps/XXXX/prod-43.358023265943651014/DocsClasses.py", line 404, in Upload
collections = GetAllEntries('https://docs.google.com/feeds/default/private/full/%s/contents?max-results=1' % (ruleTypeFolderResourceId), client)
File "/base/data/home/apps/XXXX/prod-43.358023265943651014/DocsClasses.py", line 351, in GetAllEntries
chunk = RetryGetDocList(client.GetDocList , chunk.GetNextLink().href)
File "/base/data/home/apps/XXX/prod-43.358023265943651014/DocsClasses.py", line 202, in RetryGetDocList
return functionCall(uri)
File "/base/data/home/apps/XXX/prod-43.358023265943651014/gdata/docs/client.py", line 142, in get_doclist
auth_token=auth_token, **kwargs)
File "/base/data/home/apps/XXXX/prod-43.358023265943651014/gdata/client.py", line 635, in get_feed
**kwargs)
File "/base/data/home/apps/XXXXX/prod-43.358023265943651014/gdata/client.py", line 265, in request
uri=uri, auth_token=auth_token, http_request=http_request, **kwargs)
File "/base/data/home/apps/XXXX/prod-43.358023265943651014/atom/client.py", line 117, in request
return self.http_client.request(http_request)
File "/base/data/home/apps/XXXXX/prod-43.358023265943651014/atom/http_core.py", line 420, in request
http_request.headers, http_request._body_parts)
File "/base/data/home/apps/XXXXX/prod-43.358023265943651014/atom/http_core.py", line 497, in _http_request
return connection.getresponse()
File "/base/python_runtime/python_dist/lib/python2.5/httplib.py", line 206, in getresponse
deadline=self.timeout)
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/urlfetch.py", line 263, in fetch
return rpc.get_result()
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 592, in get_result
return self.__get_result_hook(self)
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/urlfetch.py", line 371, in _get_fetch_result
raise DeadlineExceededError(str(err))
DeadlineExceededError: ApplicationError: 5
Regards
/Jens
On occasion, responses from the Google Documents List API exceed the deadline for App Engine HTTP requests. This can be the case with extremely large corpuses of documents being returned by the API.
To work around this, set the max-results parameter to a number smaller than 1000.
Also, retry the request using exponential back-off.
To work around failing uploads, use the Task Queue in App Engine to complete uploads, as well as the resumable upload feature of the API.
You can request the App Engine team increase the size of the HTTP timeout of your application to a large number of seconds that would allow this request to succeed. However, it is rare that the team approves such a request without a strong need.
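The back-off part can be as simple as the sketch below (generic, not GData-specific; the helper name is illustrative):

import random
import time

def retry_with_backoff(call, max_attempts=5):
    # Retry call() with exponentially growing waits plus a little jitter.
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            time.sleep((2 ** attempt) + random.random())  # 1 s, 2 s, 4 s, 8 s, ...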

QTP Recovery scenario used to "skip" consecutive FAILED steps with 0 timeout -- how can I restore original timeout value?

Suppose I use QTP's recovery scenario manager to set the playback synchronization timeout to 0. The handler would return with "continue with next statement".
I'd do that to make sure that any following playback statements don't waste their time waiting for the next non-existing/non-matching step before failing:
I have a lot of GUI tests that kind of get stuck because, let's say, if 10 controls are missing, their (consecutive) playback steps produce 10 timeout waits before failing. If the playback timeout is 30 seconds, I lose 10 x 30 seconds = 5 minutes of execution time, while it really would be sufficient to wait for 30 seconds ONCE (because the app does not change anymore -- we already waited a full timeout period).
Now if I have 100 test cases (=action iterations), this possibly happens 100 times, wasting 500 minutes of my test exec time window.
That's why I came up with the idea of a recovery scenario function that sets the timeout to 0 after/upon the first failed playback step. This would accelerate execution while skipping the rightly-FAILED step, yet would not compromise the precision/reliability of identifying the next matching GUI context (which creates a PASSED step).
Then of course upon the next passed playback step, I would want to restore the original timeout value. How could I do that? This is my question.
One cannot define a recovery scenario function that is called for PASSED steps.
I am currently thinking about setting a method function for Reporter.ReportEvent, and "sniffing" for PASSED log entries there. I'd install that method function in the recovery scenario function which sets the timeout to 0. Then, when the "sniffer" function senses a ReportEvent call with PASSED status during one of the following playback steps, I'd reset everything (i.e. restore the original timeout, and uninstall the method function). (I am 99% sure, however, that .Click and .Set methods do not call ReportEvent to write their result status... so this option probably won't work.)
Better ideas? This really bugs me.
It sounds to me like your tests aren't designed correctly; if you fail to find an object, why do you continue?
One possible (non-recovery-scenario) solution would be to use RegisterUserFunc to override the methods you are using, in order to do an obj.Exist(0) check before running the required method.
Function MyClick(obj)
    If obj.Exist(1) Then
        obj.Click
    Else
        Reporter.ReportEvent micFail, "Click failed, no object", "Object does not exist"
    End If
End Function

RegisterUserFunc "Link", "Click", "MyClick"
RegisterUserFunc "WebButton", "Click", "MyClick"
' etc.
If you have many controls, some of which may be missing, and you know that after the 10 seconds you mentioned (when the first timeout occurs) nothing more will show up, then you can use the Exist method with a timeout parameter.
Something like this:
timeout = 10
For Each control In controls
    If control.Exist(timeout) Then
        ' do something with the control
    Else
        timeout = 0
    End If
Next
Now only the first timeout will be 10 seconds. Each and every subsequent timeout in your collection of controls will be 0, which will save you time.
