Auto-Reply Tweets on Twitter (bot): is there a simple way or tool? - twitter

Is there any possible and simple way to make a Twitter bot that will reply to certain tweets (depending on search terms) at a certain time interval? Can anyone help me?
For example: twitter.com/shastribot
Thanks

If you like Ruby, then I suggest using the Twitter gem: https://github.com/jnunemaker/twitter
It makes things very easy.
You could then write a script that checks whether there are any replies to the bot and, if there are new ones, sends out a message. Then set it up as a cron job that runs as often as you think necessary.
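To make the shape of such a script concrete, here is a rough sketch (in Python with tweepy rather than Ruby, purely for illustration; the search term, reply text, credentials, and state file are all placeholders, so treat it as a starting point rather than a finished bot). It searches for a term, skips tweets it has already answered, replies to the rest, and is meant to be run from cron:
import json
import tweepy

auth = tweepy.OAuth1UserHandler("CONSUMER_KEY", "CONSUMER_SECRET",
                                "ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

# Remember which tweets were already answered between cron runs.
try:
    with open("answered.json") as f:
        answered = set(json.load(f))
except FileNotFoundError:
    answered = set()

for tweet in api.search_tweets(q="my search term", count=50):
    if tweet.id not in answered:
        # Mentioning the author makes the status a proper reply.
        api.update_status(
            status=f"@{tweet.user.screen_name} hello from the bot!",
            in_reply_to_status_id=tweet.id)
        answered.add(tweet.id)

with open("answered.json", "w") as f:
    json.dump(list(answered), f)
A crontab entry like */10 * * * * python3 /path/to/bot.py would then run it every ten minutes.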
There's also the Twitter Bot interface to Twitter; I haven't used it myself, but it might be worth a look: http://integrum.rubyforge.org/twitter_bot/

You should try tweebot. It's a Python micro-framework for Twitter bots. The library provides built-in blocks (Filters, Selectors, and Actions) that you can combine to meet your requirements. For example, the following code demonstrates how to create a canonical implementation of a "retweet" bot (more examples).
# The following code demonstrates how to create a simple Twitter bot that
# selects all friends' tweets that mention you and retweets them.
import tweebot as twb

def main():
    # Step 1. Set up the context configuration.
    repeater = twb.Context({
        'app_name'        : 'repeater',
        'username'        : '<YOUR ACCOUNT NAME>',
        'consumer_key'    : '<YOUR CONSUMER KEY>',
        'consumer_secret' : '<YOUR CONSUMER SECRET>',
        'access_key'      : '<YOUR ACCESS KEY>',
        'access_secret'   : '<YOUR ACCESS SECRET>',
        'timeout'         : 10 * 60,         # 10 min, to respect Twitter API limits
        'history_file'    : 'history.json',  # don't repeat answered tweets
    })
    # Step 2. Enable pretty logging (stdout by default).
    twb.enable_logging(repeater)
    # Step 3. Set up the chain Selector -> Filters -> Action.
    chain = (
        # Select recent tweets that mention the current user.
        twb.SearchMentions(),
        # Apply several filters to the selected tweets:
        twb.MultiPart.And(
            # exclude answered, blocked, and own tweets,
            twb.BaseFilter,
            # then keep only friends' tweets (the friends list will be cached),
            twb.UsersFilter.Friends(),
            # and finally, exclude tweets with invalid content
            twb.BadTweetFilter),
        # Now retweet the remaining tweets.
        twb.ReplyRetweet)
    # Step 4. Start processing.
    repeater.start_forever(*chain)

if __name__ == '__main__':
    main()

Ruby's twitter gem is a very good one. You can look through the Twitter API documentation to see the available methods.
You can start with a Twitter::REST::Client like the following:
twitter_client = Twitter::REST::Client.new do |config|
  config.consumer_key        = "YOUR_CONSUMER_KEY"
  config.consumer_secret     = "YOUR_CONSUMER_SECRET"
  config.access_token        = "YOUR_ACCESS_TOKEN"
  config.access_token_secret = "YOUR_ACCESS_SECRET"
end
Then you can use your twitter_client for various purposes. For example, you can post a tweet to your profile using this:
twitter_client.update("I am posting this tweet from my Ruby program")
You can get a list of a user's tweets by providing the Twitter username like this:
twitter_client.user_timeline("YOUR_TWITTER_USER_NAME").each do |tweet|
  puts tweet.text
end
For searching for tweets, take a look at this.

Related

How to get a user's new Tweets with a Telegram Python bot while run_polling?

I'm currently developing a Telegram bot using python-telegram-bot and tweepy.
I want to create a feature that allows users of the bot to add a list of Twitter IDs via Telegram and have those users' new tweets forwarded to them in real time.
I want the bot to use application.run_polling() to receive commands from users and, at the same time, forward new tweets from the Twitter users on each user's individual list.
Reading the tweepy documentation, I realized that I can get real-time tweets with fewer API requests if I fetch them through MyStream(auth=auth, listener=None).
But I don't know how to get both functions to work in the same file at the same time.
Versions: nest_asyncio 1.5.6, python-telegram-bot 20.0, tweepy 4.12.1
def main() -> None:
    application = Application.builder().token("...").build()
    add_list = ConversationHandler(
        entry_points=[CallbackQueryHandler(input_id, pattern='input_id')],
        states={ADD: [MessageHandler(filters.TEXT & ~filters.COMMAND, add)]},
        fallbacks=[CallbackQueryHandler(button, pattern='back')])
    application.add_handler(CommandHandler("on", on))
    application.add_handler(add_list)
    application.add_handler(CommandHandler("start", start))
    application.add_handler(CommandHandler("list", list_setting))
    application.add_handler(CommandHandler("admin", admin))
    application.add_handler(CommandHandler("help", help_command))
    application.add_handler(CallbackQueryHandler(button))
    application.run_polling()

if __name__ == "__main__":
    main()
This is my main function, and it works: application.run_polling() keeps it running until a SIGINT (Ctrl+C) comes in.
I want to combine the above code with the following so that both run at the same time.
import tweepy

consumer_key = "..."         # Twitter API key
consumer_secret = "..."      # Twitter API secret key
access_token = "..."         # Twitter access token
access_token_secret = "..."  # Twitter access token secret

usernames = ['...']

auth = tweepy.OAuth1UserHandler(
    consumer_key, consumer_secret, access_token, access_token_secret
)

# Convert screen names to user IDs
user_ids = []
for username in usernames:
    user = tweepy.API(auth).get_user(screen_name=username)
    user_ids.append(str(user.id))

# Create a custom stream class
class MyStream(tweepy.Stream):
    def __init__(self, auth, listener=None):
        super().__init__(consumer_key, consumer_secret, access_token, access_token_secret)

    def on_status(self, status):
        tweet_url = f"https://twitter.com/{status.user.screen_name}/status/{status.id_str}"
        print(f"{status.user.screen_name} tweeted: {status.text}\n{tweet_url}")
        # send message to telegram

# Create a stream object with the above class and authentication
myStream = MyStream(auth=auth, listener=None)

# Start streaming for the selected users
myStream.filter(follow=user_ids)
I also tried using a timer thread and python-telegram-bot's job_queue.run_repeating, but both seem problematic for forwarding messages in real time.
I'm desperately looking for someone to help me with this 😢.
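One approach that might work is sketched below, under two assumptions: with tweepy 4.x, Stream.filter(threaded=True) runs the stream on a daemon thread and returns immediately, and with python-telegram-bot 20.0, run_polling() drives the loop returned by asyncio.get_event_loop(). The stream is started first, run_polling() then blocks the main thread, and each tweet is handed to the bot's event loop with asyncio.run_coroutine_threadsafe. The chat ID, token, and credentials are placeholders; treat this as a sketch, not a verified solution:
import asyncio
import tweepy
from telegram.ext import Application

consumer_key = "..."
consumer_secret = "..."
access_token = "..."
access_token_secret = "..."
user_ids = ['...']            # numeric Twitter user IDs, as strings
TELEGRAM_CHAT_ID = 123456789  # hypothetical target chat

class ForwardingStream(tweepy.Stream):
    def __init__(self, bot, loop):
        super().__init__(consumer_key, consumer_secret, access_token, access_token_secret)
        self.bot = bot    # PTB Bot; its methods are coroutines in v20
        self.loop = loop  # the event loop that run_polling() will drive

    def on_status(self, status):
        url = f"https://twitter.com/{status.user.screen_name}/status/{status.id_str}"
        # on_status runs on the stream's thread, so schedule the send
        # on the bot's loop instead of awaiting it here:
        asyncio.run_coroutine_threadsafe(
            self.bot.send_message(chat_id=TELEGRAM_CHAT_ID, text=url),
            self.loop)

def main() -> None:
    application = Application.builder().token("...").build()
    # ... add_handler calls as in the question ...
    stream = ForwardingStream(application.bot, asyncio.get_event_loop())
    stream.filter(follow=user_ids, threaded=True)  # returns immediately
    application.run_polling()  # blocks the main thread until Ctrl+C

if __name__ == "__main__":
    main()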

How to parse twitter using python and based on geolocation?

I am trying to parse tweets on a specific topic using Python. The code runs flawlessly, but I need to add geolocation as one of the core search criteria. Any suggestions on how to do so?
Could you provide more example code?
If I'm understanding your question correctly, you could use the tweepy library. This could also be achieved using plain GET requests; a sketch of that approach appears at the end of this answer.
I'll provide a Tweepy example:
import tweepy

# enter your Twitter credentials from the Twitter developers site
CONSUMER_KEY = ''
CONSUMER_SECRET = ''
ACCESS_KEY = ''
ACCESS_SECRET = ''

# set up authorization
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
api = tweepy.API(auth)
Location-based search can be done by passing geocode to the API's search method:
# skeleton query (geocode is 'latitude,longitude,radius'):
geo_based_tweets = api.search(q='some text I want to search on', geocode='lat,long,radius')
# example query about the Vikings in Minneapolis:
api.search(q="vikings OR football", geocode='44.9833,-93.2667,10km', count=100)
Hope that's helpful!
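And since the answer mentions plain GET requests: for reference, the same geocoded search can be sketched against the v1.1 REST endpoint with requests and requests_oauthlib (untested; the credentials are placeholders):
import requests
from requests_oauthlib import OAuth1

auth = OAuth1('CONSUMER_KEY', 'CONSUMER_SECRET', 'ACCESS_KEY', 'ACCESS_SECRET')
resp = requests.get(
    'https://api.twitter.com/1.1/search/tweets.json',
    params={'q': 'vikings OR football',
            'geocode': '44.9833,-93.2667,10km',
            'count': 100},
    auth=auth)

# each status in the response is a plain dict
for status in resp.json().get('statuses', []):
    print(status['text'])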

Tweepy user_search api is very slow

After two days of unsuccessful attempts to use the twitter gem, I have decided to use Python's tweepy for the task. (My original attempt was with Ruby and I posted the question here.)
My task is to collect all actresses who have a verified account on Twitter. I have taken the list of actresses from Wikipedia.
Everything looks fine so far. I have started hitting the Twitter REST API with each name, checking whether the account is verified or not.
The only problem I have is that the response is very slow. It takes about 12-15 seconds for every request. Am I doing something wrong here, or is this how it is supposed to be?
Below is my code in its entirety:
import tweepy

consumer_key = 'xxx'
consumer_secret = 'xxx'
access_token_key = 'xx-xx'
access_token_secret = 'xxx'

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token_key, access_token_secret)
api = tweepy.API(auth)

# read the list of actress names, one per line
actresses = []
f = open('final', 'r')
for line in f:
    actresses.append(line)
f.close()

print actresses

# search for each name, keeping only verified accounts with an exact name match
for actress in actresses:
    print actress
    users = api.search_users(actress)
    for u in users:
        if u.verified == True and u.name == actress:
            print u.name + " === https://twitter.com/" + u.screen_name
Also is there any better way to extract the verified actresses using that list?
Unfortunately, there is no faster way to do it, given that you only know the actresses' full names, and not their screen names. Each request will take a long time, as Twitter needs to return the results of users matching the query (there may be quite a few). Each one needs to be loaded and examined, which can take a while, depending on how many results were returned.

How do I setup geocoder with google_premier?

I've read the docs for the geocoder gem which state you can set a key, client and channel when using Google Premier.
According to some other posts I've read here, it's now possible to use an API key and still not pay, as long as you're below the free threshold. We need to do this because we host with Heroku and we keep hitting our daily limit. We're probably not hitting that limit ourselves, but without any other sort of identification, we're likely sharing an IP-based limit with other Heroku sites. Using a key will identify us and therefore keep us from hitting someone else's limit.
However, when I look at the sign up pages for the Google API, there are a baffling array of client ids, api keys and secrets, for installed apps, web apps and so on. Which combination is the one required to make geocoder burst into life?
To answer the question:
When subscribing to Google Premier, you should have received a client ID starting with gme- and a key (see https://developers.google.com/maps/documentation/business/articles/prelaunch_checklist#welcome_letter).
The third argument needed by geocoder is the channel, which can be any kind of string (see https://developers.google.com/maps/documentation/business/guide#Channels).
You also need to add the list of authorised URLs originating the requests in the Google Portal (see https://developers.google.com/maps/documentation/business/guide#URLs).
From the Geocoder docs, you can use a setting like:
# -*- encoding : utf-8 -*-
Geocoder.configure do |config|
  config.lookup  = :google_premier
  config.api_key = ["gme-client-id", "key", "channel"]
  config.timeout = 10
  config.units   = :km
end
But it would probably be a better choice to use client-side geocoding, as recommended here: https://developers.google.com/maps/articles/geocodestrat?hl=fr#client
This worked for me:
Geocoder.configure(
  :lookup  => :google_premier,
  :api_key => [ 'GOOGLE_CRYPTO_KEY', 'GOOGLE_CLIENT_ID', 'GOOGLE_CHANNEL' ],
  :timeout => 5,
  :units   => :km,
)
You'll need to substitute in the corresponding values from your Google Maps for Business welcome email. Channel is a value of your choosing.

Using Google's Audit API to monitor google apps email

I need to give some admin users on Google Apps Gmail the ability to monitor their employees' email. Has anyone used Google's Audit API to do this?
I wish there was a way for the admins to just click to view one of their users' email, but that doesn't seem to be the case.
If it matters, the application is a Rails app. The email is handled entirely by Google's mail through Google Apps. Any advice from anyone who has done this would be helpful.
Update! 500 points for this one!
I'm using Ruby on Rails, hosting the app on Heroku. The email is hosted entirely with Google Apps Standard (not Business, so we will have to upgrade), and the DNS is with Zerigo, which you already know if you use Heroku.
Well, I hadn't planned on extending the gdata-ruby-util gem :), but here's some code that could be used for the Google Audit API, based on Google's documentation. I only wrote a create_monitor_on method, but the rest is pretty easy to add.
Let me know if it works or needs any rewrites and I'll update it here:
class Audit < GData::Client::Base
  attr_accessor :store_at

  def initialize(options = {})
    options[:clientlogin_service] ||= 'apps'
    options[:authsub_scope] ||= 'https://apps-apis.google.com/a/feeds/compliance/audit/'
    super(options)
  end

  def create_monitor_on(email_address)
    user_name, domain_name = email_address.split('@')
    entry = <<-EOF
      <atom:entry xmlns:atom='http://www.w3.org/2005/Atom' xmlns:apps='http://schemas.google.com/apps/2006'>
        <apps:property name='destUserName' value='#{@store_at}'/>
        <apps:property name='beginDate' value=''/>
        <apps:property name='endDate' value='2019-06-30 23:20'/>
        <apps:property name='incomingEmailMonitorLevel' value='FULL_MESSAGE'/>
        <apps:property name='outgoingEmailMonitorLevel' value='FULL_MESSAGE'/>
        <apps:property name='draftMonitorLevel' value='FULL_MESSAGE'/>
        <apps:property name='chatMonitorLevel' value='FULL_MESSAGE'/>
      </atom:entry>
    EOF
    return true if post('https://apps-apis.google.com/a/feeds/compliance/audit/mail/monitor/' + domain_name + '/' + user_name, entry).status_code == 201
    false
  end
end
Then use it elsewhere like this:
auditor = Audit.new
auditor.store_at = 'this-username'
auditor.clientlogin(username, password)
render :success if auditor.create_monitor_on('email-address@my-domain.com')
My suggestion is to create one core email address that all the email monitors are sent to, so your admins' inboxes aren't slammed with everyone else's mail. Then, in your Rails app, use Net::IMAP to download the messages you want from that master email account. For example, you can create a link that says "View Joe's Email" and have the method do something like this:
require 'net/imap'

imap = Net::IMAP.new('imap.gmail.com', 993, true)
imap.login('this-username@my-domain.com', password)
imap.select('INBOX')

messages = []
imap.search(["TO", "joe@email.com"]).each do |msg_id|
  msg = imap.fetch(msg_id, "(UID RFC822.SIZE ENVELOPE BODY[TEXT])")[0]
  body = msg.attr["BODY[TEXT]"]
  env = imap.fetch(msg_id, "ENVELOPE")[0].attr["ENVELOPE"]
  messages << { :subject => env.subject, :from => env.from[0].name, :body => body }
end
imap.logout
imap.disconnect
Then you can put those messages in your view -- or send them all in one bulk email, or whatever you want to do.