Search Tweets from Twitter API returns no result

First time trying to get Twitter API to work. I followed these instructions: https://developer.twitter.com/en/docs/twitter-api/tweets/search/quick-start/recent-search
I got my API Key and secret and bearer token.
In postman I tried: https://api.twitter.com/2/tweets/search/recent?query=from:taylorswift13&tweet.fields=created_at&expansions=author_id&user.fields=description
But it returned nothing (I tried other usernames as well, and fewer params). In authorization I picked bearer token and pasted my token, so I should be okay there (when I entered a wrong token I got an authentication error, so I believe the authentication part is fine). I also tried the API key method and it didn't work either. Any help is greatly appreciated!

You need to remove the from: prefix in the query value.
Your token is being handled correctly.
From
from:taylorswift13
To
taylorswift13
So the recent search URL is
https://api.twitter.com/2/tweets/search/recent?tweet.fields=created_at&expansions=author_id&user.fields=description&query=taylorswift13
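For reference, here is a minimal sketch of building that same request outside Postman, using only the Python standard library (the bearer token value is a placeholder you'd replace with your own):

```python
# Build the corrected recent-search request; the token string is a placeholder.
from urllib.parse import urlencode
from urllib.request import Request

BEARER_TOKEN = "your bearer token"
params = {
    "query": "taylorswift13",
    "tweet.fields": "created_at",
    "expansions": "author_id",
    "user.fields": "description",
}
url = "https://api.twitter.com/2/tweets/search/recent?" + urlencode(params)
req = Request(url, headers={"Authorization": "Bearer " + BEARER_TOKEN})
print(req.full_url)
# urllib.request.urlopen(req) would actually perform the call (not done here).
```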
Get Access Token
https://api.twitter.com/oauth2/token?grant_type=client_credentials
This endpoint returns the user name for a given user ID:
https://api.twitter.com/2/users/{user-id}
This code will help get all tweets for a specific user, looked up by name:
import tweepy

bearer_token = "your token"
client = tweepy.Client(bearer_token=bearer_token)

# look up the numeric user ID from the handle
user_name = "taylorswift13"
user = client.get_user(username=user_name)
user_id = user.data.id

# first page of the user's tweets
tweets = client.get_users_tweets(id=user_id)
count = 0
for tweet in tweets.data:
    print(tweet.id, " -> ", tweet.text)
    count += 1
    print('---------------------------------------------------------', count)

# meta has no 'next_token' key on the last page, so use .get() rather than ['next_token']
next_token = tweets.meta.get('next_token')
while next_token is not None:
    tweets = client.get_users_tweets(id=user_id, pagination_token=next_token)
    if tweets.data is not None:
        for tweet in tweets.data:
            print(tweet.id, " -> ", tweet.text)
            count += 1
            print('---------------------------------------------------------', count)
        next_token = tweets.meta.get('next_token')
    else:
        next_token = None
One caveat: on the last page of results tweets.meta has no 'next_token' key, so indexing tweets.meta['next_token'] directly raises
KeyError: 'next_token'
Using tweets.meta.get('next_token') avoids this; either way most tweets will display (570 total).
Result
... removed first lines
1140602666890014720 -> A happy meal 🍔 🍟 💗 https://xxx/hPAbOZEsKF
--------------------------------------------------------- 545
1140599648182243328 -> Stan/Follow/Support! #A_doubleC_D #adamlambert #Adaripp #AdoreDelano #antoni #theebillyporter #bobbyberk #chesterlockhart #ciara #DeltaWork #DexStar84 #TheEllenShow #harto #HayleyKiyoko #QueenJadeJolie #jessetyler #JustinMikita #jvn #Karamo #katyperry #Lavernecox #YNTCDmusicvideo
--------------------------------------------------------- 546
1140594511325908992 -> The #YNTCDmusicvideo is out! First, I want to say that my co-stars in this video are AMAZING. Please celebrate by supporting their work, following them, and going to see them perform. I’m SO grateful and SO EXCITED I ACTUALLY DO NEED TO CALM DOWN.🍹 https://xxx/787ksrnpBY https://xxx/Oj7GtQ7fxa
--------------------------------------------------------- 547
1140579059769978880 -> The #YNTCDmusicvideo premieres in 1 hour 💞
Premiere watch page: https://xxx/gqUZ9FnKU1
#YouTube #youtubemusic https://xxx/Ek972t3hkE
--------------------------------------------------------- 548
1140424618202882053 -> RT #TheEllenShow: My friend #TaylorSwift13 asked me to be in her new music video, “You Need To Calm Down.” It premieres tomorrow! Now I nee…
--------------------------------------------------------- 549
1140273222274945025 -> Asked a few friends to be in the You Need To Calm Down video 😄 Out tomorrow at 8:15am ET https://xxx/QFp2Ni4Lb0
--------------------------------------------------------- 550
1140025869706104840 -> Tea time! Monday morning! 💕 https://xxx/YLt8UD9dNC
--------------------------------------------------------- 551
1139939719113060352 -> We all got crowns 👑 💕 https://xxx/vI2rXQQMTl
--------------------------------------------------------- 552
1139749718362398720 -> Can you just not step on our gowns? 💃 https://xxx/A8QPkZicqS
--------------------------------------------------------- 553
1139592994401771520 -> RT #Target: We promise that you'll never find another like this! #PreorderLover #taylorswift13's new album - 4 exclusive deluxe versions. O…
--------------------------------------------------------- 554
1139585079418785792 -> Alexa, play Today in Music 💗
#amazonmusic https://xxx/XNpxSYrPpG
--------------------------------------------------------- 555
1139562838891139079 -> A delicious new video comes out Monday morning...💗🎂💗 https://xxx/fnZMz6P5dg
--------------------------------------------------------- 556
1139384677813219330 -> Gxgjxkhdkdkydkhdkhfjvjfj
https://xxx/XjL0jBb4i0 https://xxx/8J6Bc89NQx
--------------------------------------------------------- 557
1139365833011015680 -> There were five posts in the fence. https://xxx/dHgwKbd1Q5
--------------------------------------------------------- 558
1139282660860334107 -> Lover, album out August 23. Cover shot by the artistic genius that is #valheria123 💗 Pre-add, pre-save, pre-order (all the pre stuff you feel like doing) Can’t wait for you to hear this. https://xxx/SGjcCUYZdM https://xxx/IPy54raQUF
--------------------------------------------------------- 559
1138895438101323777 -> Going live tomorrow on Instagram at 5pm ET 😁 https://xxx/l4tnhj84tG
--------------------------------------------------------- 560
1135606264883482624 -> RT #TheEllenShow: .#TaylorSwift13, you make me so proud. #PrideMonth https://xxx/nbVZoKOzub
--------------------------------------------------------- 561
1135606045378830337 -> RT #onemuslimgal: Because you can want who you want. Boys and Boys. Girls and Girls. #letterToMysenator #taylornation13 #taylorswift13 http…
--------------------------------------------------------- 562
1135605849416753152 -> RT #PetermanAudrey: Today I wrote to my senator, Ted Cruz, for the first time. Thanks to the awe inspiring words of Taylor Swift and the ex…
--------------------------------------------------------- 563
1135604986111508482 -> RT #heartsandhalo: “The most common way people give up their power is by thinking they don’t have any.” 🌈 #lettertomysenator https://xxx/J…
--------------------------------------------------------- 564
1135604520338231296 -> RT #HayleyGeronimo: It’s way past my bedtime, but I was so moved by Taylor’s letter to her senator that I had to pick up my own pen too!!🖊📃…
--------------------------------------------------------- 565
1135601687274541057 -> 📷: #ashleyophoto, #jeffkravitz // Getty Entertainment https://xxx/mijbu6cV5E
--------------------------------------------------------- 566
1135217534825881600 -> 🌈Like a rainbow with all of the colors🌈
Thank you to everyone who came to Wango Tango! That was FUN 🥳 Ps a huge thank you to #brendonurie for surprising the crowd!!
📸: Rich Fury, Kevin Mazur, Jeff Kravitz, Wes and Alex // Getty Entertainment https://xxx/dXWewgIUPv
--------------------------------------------------------- 567
1134673128301453312 -> #lettertomysenator
https://xxx/EKYMXZw5U9 https://xxx/Ym0mGeOHgc
--------------------------------------------------------- 568
1134524686388404225 -> I love seeing what you’re listening to on your own playlists that you’ve created on #AppleMusic. Keep sharing them with me with the hashtag #PlaylistbyME https://xxx/ivvkvt6V9g https://xxx/9XfMvCNxia
--------------------------------------------------------- 569
1133936366977540096 -> 😻 https://xxx/Lyr0vtXO69
--------------------------------------------------------- 570

Related

How to find number of rows with data count more than 3 in Google Sheets?

Imagine we have a dataset in Google Sheets representing a grade book. Columns E, G, I, K, and M hold the scores for questions 1 to 5, and rows 5 to 64 are the students. I want to see how many of the students have solved at least 3 questions. Here, by "solving" I mean the student got a full mark on that question (the grade distribution can also vary; for example, question 1 is out of 10 points while the others are out of 25).
One thing that popped into my mind was to create a new column storing the number of solved questions for each student, and then counting how many of those are >= 3. Is there a way to solve the problem without creating or using new rows/columns?
I didn't find anything that both deals with rows and keeps track of the cell count within those rows. One approach is the inclusion–exclusion principle, as in this link. It'd basically be something like
COUNTIFS(E5:E64,E4,G5:G64,G4,I5:I64,I4) + COUNTIFS(E5:E64,E4,G5:G64,G4,K5:K64,K4) + COUNTIFS(E5:E64,E4,G5:G64,G4,M5:M64,M4) + COUNTIFS(E5:E64,E4,I5:I64,I4,K5:K64,K4) + COUNTIFS(E5:E64,E4,I5:I64,I4,M5:M64,M4) + COUNTIFS(E5:E64,E4,K5:K64,K4,M5:M64,M4) + COUNTIFS(G5:G64,G4,I5:I64,I4,K5:K64,K4) + COUNTIFS(G5:G64,G4,I5:I64,I4,M5:M64,M4) + COUNTIFS(G5:G64,G4,K5:K64,K4,M5:M64,M4) + COUNTIFS(I5:I64,I4,K5:K64,K4,M5:M64,M4) - (COUNTIFS(E5:E64,E4,G5:G64,G4,I5:I64,I4,K5:K64,K4) + COUNTIFS(E5:E64,E4,G5:G64,G4,I5:I64,I4,M5:M64,M4) + COUNTIFS(E5:E64,E4,G5:G64,G4,K5:K64,K4,M5:M64,M4) + COUNTIFS(E5:E64,E4,I5:I64,I4,K5:K64,K4,M5:M64,M4) + COUNTIFS(G5:G64,G4,I5:I64,I4,K5:K64,K4,M5:M64,M4) - COUNTIFS(E5:E64,E4,G5:G64,G4,I5:I64,I4,K5:K64,K4,M5:M64,M4))
this link was the closest I got.
I think using matrices and multiplying them could be the solution. However, I'm not very good at that!
I'd appreciate any help.
Thanks in advance.
Update: here is a table to better understand the problem. The formula should return 2 (both Mr. w and Mr. z satisfy the condition).

Student Name  Question 1  Question 2  Question 3  Question 4  Question 5
Mr. x         10          14          17          8           25
Mr. y         8           25          25          14          19
Mr. w         10          25          17          8           25
Mr. z         10          14          25          25          25
This should cover it:
=SUMPRODUCT(--(((E5:E64=E4)+(G5:G64=G4)+(I5:I64=I4)+(K5:K64=K4)+(M5:M64=M4))>=3))
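The formula compares each score column against the full marks in row 4 and counts the rows where at least three of those comparisons are TRUE. Here is the same logic in plain Python, using the sample table and assuming hypothetical full marks of 10, 25, 25, 25, 25 in row 4:

```python
# Full marks per question (a stand-in for sheet row 4): Q1 out of 10, Q2-Q5 out of 25.
full_marks = [10, 25, 25, 25, 25]

# The sample table from the question.
scores = {
    "Mr. x": [10, 14, 17, 8, 25],
    "Mr. y": [8, 25, 25, 14, 19],
    "Mr. w": [10, 25, 17, 8, 25],
    "Mr. z": [10, 14, 25, 25, 25],
}

# Count students whose per-question "equals full mark" flags sum to >= 3,
# mirroring SUMPRODUCT(--(((E=E4)+(G=G4)+(I=I4)+(K=K4)+(M=M4))>=3)).
solved_at_least_3 = sum(
    sum(s == f for s, f in zip(marks, full_marks)) >= 3
    for marks in scores.values()
)
print(solved_at_least_3)  # 2
```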

Select every hour query

I have a simple weather station DB with example content:
time humi1 humi2 light pressure station-id temp1 temp2
---- ----- ----- ----- -------- ---------- ----- -----
1530635257289147315 66 66 1834 1006 bee1 18.6 18.6
1530635317385229860 66 66 1832 1006 bee1 18.6 18.6
1530635377466534866 66 66 1829 1006 bee1 18.6 18.6
The station writes data every minute. I want a SELECT that returns not every point, but just one point per hour (every 60th point, simply put). How can I achieve that?
I tried to experiment with ...WHERE time % 60 = 0, but it didn't work. It seems the time column doesn't permit any math operations (/, %, etc.).
GROUP BY along with one of the selector or aggregate functions can do what you want:
SELECT FIRST("humi1"), FIRST("humi2"), ... GROUP BY time(1h)
For most climate data, though, you'd probably want MEAN or MEDIAN rather than a single data point every hour.
basic example, and more complex example
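In case it helps to see what GROUP BY time(1h) does conceptually (and why FIRST vs MEAN matters), here is a rough pure-Python sketch of the hourly bucketing, using hypothetical minute-by-minute samples rather than real InfluxDB data:

```python
# Rough sketch of what GROUP BY time(1h) does: bucket points into hour-wide
# windows, then reduce each bucket with a function such as FIRST or MEAN.
from statistics import mean

# Hypothetical minute-by-minute temp1 samples over two hours (ns timestamps).
samples = [(t * 60_000_000_000, 18.0 + t * 0.01) for t in range(120)]

buckets = {}
for ts, value in samples:
    hour = ts // 3_600_000_000_000  # truncate the timestamp to its hour
    buckets.setdefault(hour, []).append(value)

first_per_hour = [vals[0] for vals in buckets.values()]              # like FIRST(...)
mean_per_hour = [round(mean(vals), 3) for vals in buckets.values()]  # like MEAN(...)
print(first_per_hour)
print(mean_per_hour)
```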

Where is CAN ID in CAN Message Frame

I am new to the CAN-BUS protocol. So was going through the CAN Bus Specifications and related documents.
I have always used the CAN ID and Frame at the application level.
A CAN ID looks like 0x1a1; a CAN frame looks like ff 22 ff 33 c0 33 ee 44 (8 bytes).
In the specification they mention that the frame contains an identifier field, and I'm not sure what that is.
Is it the CAN ID, like 0x1a1, or the CAN ID plus some other bits?
No document states this clearly.
If it is not the CAN ID, where is the CAN ID in the CAN frame format?
Can anyone clarify this?
In short, a single CAN data frame consists of the identifier (your CAN ID, e.g. 0x1a1, carried in the arbitration field: 11 bits in the base frame format or 29 bits in the extended format), the data field, and other fields (control, CRC, ACK):
https://en.wikipedia.org/wiki/CAN_bus#Data_frame
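As an illustration of how the ID travels with the frame at the application level (the on-wire arbitration field itself is handled by the controller), here is a sketch using Linux SocketCAN's struct can_frame layout, with the ID and data bytes from the question:

```python
# Sketch of where the ID sits when a frame is handed to the application,
# using Linux SocketCAN's struct can_frame layout: a 32-bit can_id, a
# length byte (DLC), 3 padding bytes, then up to 8 data bytes.
import struct

CAN_FRAME_FMT = "=IB3x8s"

can_id = 0x1A1                                        # the identifier field
data = bytes([0xFF, 0x22, 0xFF, 0x33, 0xC0, 0x33, 0xEE, 0x44])
frame = struct.pack(CAN_FRAME_FMT, can_id, len(data), data)

unpacked_id, dlc, payload = struct.unpack(CAN_FRAME_FMT, frame)
print(hex(unpacked_id), payload.hex())  # 0x1a1 ff22ff33c033ee44
```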

Estimating mortality with acmeR package

There is a relatively new package that has come out called acmeR for producing estimates of mortality (for birds and bats), and it takes into consideration things like bleedthrough (was the carcass still there but undetected, and then found in a later search?), diminishing searcher efficiency, etc. This is extremely useful, except I can't seem to get it to work, despite it seeming to be pretty straightforward. The data structure should be like:
Date, in US format mm/dd/yyyy or ISO 8601 format yyyy-mm-dd
Time, in am/pm US format HH:MM:SS AM or 24-hr ISO format HH:MM:SS
ID, arbitrary distinct alpha strings unique to each carcass
Species, arbitrary distinct alpha strings (e.g. AOU, ABMP, IBP)
Event, “Place”, “Check”, or “Search” (only 1st letter counts)
Found, TRUE or FALSE (only 1st letter counts)
and look something like this:
“Date”,“Time”,“ID”,“Species”,“Event”,“Found”
“1/7/2011”,“08:00:00 PM”,“T091”,“UNBA”,“Place”,TRUE
“1/8/2011”,“12:00:00 PM”,“T091”,“UNBA”,“Check”,TRUE
“1/8/2011”,“16:00:00 PM”,“T091”,“UNBA”,“Search”,FALSE
My data look like this:
Date: Date, format: "2017-11-09" "2017-11-09" "2017-11-09" ...
Time: Factor w/ 644 levels "1:00 PM","1:01 PM",..: 467 491 518 89 164 176 232 39 53 247 ...
Species: Factor w/ 52 levels "AMCR","AMKE",..: 31 27 39 27 39 31 39 45 27 24 ...
ID: Factor w/ 199 levels "GHBT001","GHBT002",..: 1 3 2 3 2 1 2 7 3 5 ...
Event: Factor w/ 3 levels "Check","Place",..: 2 2 2 3 3 3 1 2 1 2 ...
Found: logi TRUE TRUE TRUE FALSE TRUE TRUE ...
I have played with the date, time, event, etc formats too, trying multiple formats because I have had some issues there. So here are some of the errors I have encountered, depending on what subset of data I use:
Error in optim(p0, f, rd = rd, method = "BFGS", hessian = TRUE) :non-finite value supplied by optim In addition: Warning message: In log(c(a0, b0, t0)) : NaNs produced
Error in read.data(fname, spec = spec, blind = blind) : Expecting date format YYYY-MM-DD (ISO) or MM/DD/YYYY (USA) USA
Error in solve.default(rv$hessian): system is computationally singular: reciprocal condition number = 1.57221e-20
Warning message: In sqrt(diag(Sig)[2]) : NaNs produced
Error in solve.default(rv$hessian) : Lapack routine dgesv: system is exactly singular: U[2,2] = 0
The last error is the most common (and note, my data are non-numeric, so I am not sure what math is being done behind the scenes that then fails in solve), but the others are persistent too. Sometimes, despite the formatting being exactly the same, a subset of a dataset returns an error when the parent dataset does not (as far as I can tell, it has nothing to do with a species being present in one dataset and not the other).
I cannot find any bug reports or issues with the acmeR package, so maybe it runs perfectly and my data are the problem, but after three ecologists have vetted the data and pronounced it good, I am at a dead end.
Here is a subset of the data, so you can see what they look like:
Date Time Species ID Event Found
8 2017-11-09 1:39 PM VATH GHBT007 P T
11 2017-11-09 2:26 PM CORA GHBT004 P T
12 2017-11-09 2:30 PM EUST GHBT006 P T
14 2017-11-09 6:43 AM CORA GHBT004 S T
18 2017-11-09 8:30 AM EUST GHBT006 S T
19 2017-11-09 9:40 AM CORA GHBT004 C T
20 2017-11-09 10:38 AM EUST GHBT006 C T
22 2017-11-09 11:27 AM VATH GHBT007 S F
32 2017-11-09 10:19 AM EUST GHBT006 C F
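One concrete mismatch worth checking, since read.data is picky about formats: the spec above wants Time as HH:MM:SS AM (with seconds), while the sample data has times like 1:39 PM. A small Python sketch of the normalization (in an R workflow the equivalent would use strptime/format):

```python
# Normalize "1:39 PM" style times to the "HH:MM:SS AM" shape the spec describes.
from datetime import datetime

def normalize_time(t):
    return datetime.strptime(t, "%I:%M %p").strftime("%I:%M:%S %p")

print(normalize_time("1:39 PM"))  # 01:39:00 PM
print(normalize_time("6:43 AM"))  # 06:43:00 AM
```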

Is col X functionally dependent on col Y?

I am trying to understand database normalisation. I saw this example of a table in second normal form (2NF) that is not in third normal form (3NF):
Tournament            Year  Winner          Winner_Date_of_Birth
Indiana Invitational  1998  Al Fredrickson  21 July 1975
Cleveland Open        1999  Bob Albertson   28 September 1968
Des Moines Masters    1999  Al Fredrickson  21 July 1975
Indiana Invitational  1999  Chip Masterson  14 March 1977
Here the primary key is (Tournament, Year). No non-prime attribute is functionally dependent on a proper subset of the primary key, so it is in 2NF.
However, according to Wikipedia, it is not in 3NF because
Tournament, Year -> Winner and
Winner -> Winner_Date_Of_Birth
give a transitive functional dependency. I understand this part, but what I would like to know is: since for our key (Tournament, Year) there can only be one unique Winner_Date_Of_Birth, is it right to say that (Tournament, Year) -> Winner_Date_Of_Birth without using the transitive property above?
Yes. Transitive means you can derive A -> C from A -> B and B -> C, so (Tournament, Year) -> Winner_Date_Of_Birth genuinely holds. The table still violates 3NF because that dependency holds through the non-prime attribute Winner, which is why the usual fix is to move Winner_Date_Of_Birth into a separate table keyed by Winner.
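For what it's worth, you can also verify the derived dependency mechanically on the sample rows; a sketch (columns indexed 0-3 as Tournament, Year, Winner, Winner_Date_of_Birth):

```python
# Check functional dependencies directly on the sample rows: X -> Y holds
# iff no two rows agree on X while disagreeing on Y.
rows = [
    ("Indiana Invitational", 1998, "Al Fredrickson", "21 July 1975"),
    ("Cleveland Open", 1999, "Bob Albertson", "28 September 1968"),
    ("Des Moines Masters", 1999, "Al Fredrickson", "21 July 1975"),
    ("Indiana Invitational", 1999, "Chip Masterson", "14 March 1977"),
]

def holds(rows, lhs, rhs):
    """True if the dependency lhs -> rhs holds over rows (column indexes)."""
    seen = {}
    for row in rows:
        key = tuple(row[i] for i in lhs)
        val = tuple(row[i] for i in rhs)
        if seen.setdefault(key, val) != val:
            return False
    return True

print(holds(rows, (0, 1), (3,)))  # (Tournament, Year) -> DOB: True
print(holds(rows, (2,), (3,)))    # Winner -> DOB: True
```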
