Twitter API app-only authorization error

I am following this guide: https://dev.twitter.com/oauth/application-only
def get_bearer_token(self):
    data = {"grant_type": "client_credentials"}
    headers = {"Authorization": "Basic " + self.encoded_cred}
    r = requests.post("https://api.twitter.com/oauth2/token",
                      data=data, headers=headers)
    res = r.json()
    if res.get('token_type') != 'bearer':
        raise Exception("invalid type")
    self.access_token = res['access_token']

def search(self, q):
    data = {"count": 100, "q": q}
    headers = {"Authorization": "Bearer " + self.access_token}
    r = requests.get("https://api.twitter.com/1.1/search/tweets.json",
                     data=data, headers=headers)
    print r.status_code
    print r.text
I was able to get an access_token, but the search API call returns 400 with no content. Any ideas?
This looks similar to https://stackoverflow.com/questions/24102409/twitter-application-only-auth-returning-empty-response, but that question has no answer yet.
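One thing that stands out, as a guess: requests.get() does not send data= as the query string, so q probably never reaches Twitter and the API answers 400. For a GET request the parameters belong in params=. A minimal sketch of the search call with that change (search_tweets is a hypothetical standalone version of the method above):

import requests

def search_tweets(access_token, q):
    # params= (not data=) puts count and q into the query string of the GET
    params = {"count": 100, "q": q}
    headers = {"Authorization": "Bearer " + access_token}
    r = requests.get("https://api.twitter.com/1.1/search/tweets.json",
                     params=params, headers=headers)
    print(r.status_code)
    print(r.text)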

Related

OpenAI API returns an "unauthorized" error when the key is correct

I'm coding a Discord bot in Lua, and I thought it would be fun to implement OpenAI's API somehow. I've got everything working except that I keep getting a 401 error. Here's a portion of my code:
coroutine.wrap(function()
    local s, e = pcall(function()
        local Headers = {
            ["Authorization"] = "Bearer " .. key,
            ["Content-Type"] = "application/json",
        }
        local Body = json.encode({
            model = "text-davinci-002",
            prompt = "Human: " .. table.concat(Args, " ") .. "\n\nAI:",
            temperature = 0.9,
            max_tokens = 47, --150
            top_p = 1,
            frequency_penalty = 0.0,
            presence_penalty = 0.6,
            stop = {" Human:", " AI:"}
        })
        res, body = coro.request("POST", link, Headers, Body, 5000)
        if res == nil then
            Message:reply("didnt return anything")
            return
        end
        if res.code < 200 or res.code >= 300 then
            Message:reply("Failed to send request: " .. res.reason); return --Always ends up here "Failed to send request: Unauthorized"
        end
        Message:reply("Request sent successfully!")
    end)
end)()
The "key" is the API key I got from the website. I feel like the mistake is simple and stupid but regardless I'm stuck
It's good code, though I'd add some checks on the types before you validate the status codes.
Another possible reason is that some domains may require a proxy rather than a direct connection.
coroutine.resume(coroutine.create(function()
    local headers = {
        Authorization = "Bearer " .. key,
        ["Content-Type"] = "application/json",
    }
    local body = json.encode({
        model = "text-davinci-002",
        prompt = "Human: " .. table.concat(Args, " ") .. "\n\nAI:",
        temperature = 0.9,
        max_tokens = 47, --150
        top_p = 1,
        frequency_penalty = 0.0,
        presence_penalty = 0.6,
        stop = { " Human:", " AI:" },
    })
    local success, http_result, http_body = pcall(coro.request, "POST", link, headers, body, 5e3)
    if success ~= true then
        return error(http_result, 0)
    elseif type(http_result) == "table" and type(http_result.code) == "number"
        and (http_result.code < 200 or http_result.code >= 300) then
        -- Parentheses matter in both expressions: `and` binds tighter than
        -- `or`, and `..` binds tighter than `==`, so without the explicit
        -- grouping the checks don't do what they look like they do.
        return Message:reply("Failed to send request: "
            .. (type(http_result.reason) == "string" and http_result.reason or "No reason provided."))
    end
    return Message:reply("Request sent successfully!")
end))
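If the request still comes back 401 with the cleaned-up code, it can help to rule out the key itself by calling the API outside the bot. A minimal sketch in Python with the requests library (OPENAI_KEY is a placeholder for the exact key the bot uses; the endpoint and body mirror the Lua code above):

import requests

OPENAI_KEY = "sk-..."  # placeholder: paste the exact key the bot uses

resp = requests.post(
    "https://api.openai.com/v1/completions",
    headers={
        "Authorization": "Bearer " + OPENAI_KEY,
        "Content-Type": "application/json",
    },
    json={
        "model": "text-davinci-002",
        "prompt": "Human: hello\n\nAI:",
        "max_tokens": 47,
    },
)
# A 401 here as well means the key (or how it is stored/pasted) is the
# problem, not the Lua request code.
print(resp.status_code)
print(resp.text)

Stray whitespace or quotes around the key are a common cause of 401s, so it may also be worth printing #key (the length) in the bot to check.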

YouTube search changes the page tokens

While paging through YouTube search results, the page tokens change between requests (look at the prev and next tokens below). The code for the loop is the following:
done = "N"
while (done == "N") :
request = youtube.search().list(
part="snippet"
, q="crime|airport delay|traffic accident|home invasion"
, publishedBefore =publish_end_date
, publishedAfter =publish_start_date
, maxResults=50
, pageToken=page_token
, type="video"
)
response = request.execute()
print ('Total results: ' + str(response["pageInfo"]["totalResults"]))
if 'prevPageToken' in response:
print ('prevPageToken: ' + response["prevPageToken"])
else:
print ('prevPageToken: NO_MORE_PAGES')
if 'nextPageToken' in response:
page_token = str(response["nextPageToken"])
print ('nextPageToken: ' + page_token)
else:
page_token = ""
done = "Y"
print ('nextPageToken: NO_MORE_PAGES')
num_posts = response["pageInfo"]["resultsPerPage"]
if num_posts > batch_size:
num_posts=batch_size
print ('Number of posts to download: ' + str(num_posts))
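For what it's worth, the page tokens are opaque cursors and are not documented to stay stable between requests, so comparing them across runs isn't meaningful; the loop should only ever feed nextPageToken forward, as yours does. The Python client can also do the token bookkeeping itself via list_next(); a sketch, assuming youtube was built with googleapiclient.discovery.build:

# Same loop using list_next(), which derives the follow-up request
# (including pageToken) from the previous request/response pair.
request = youtube.search().list(
    part="snippet",
    q="crime|airport delay|traffic accident|home invasion",
    publishedBefore=publish_end_date,
    publishedAfter=publish_start_date,
    maxResults=50,
    type="video",
)
while request is not None:
    response = request.execute()
    print('Total results: ' + str(response["pageInfo"]["totalResults"]))
    request = youtube.search().list_next(request, response)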

Why does my AsyncHTTPClient request time out?

@tornado.web.authenticated
@tornado.web.asynchronous
@tornado.gen.coroutine
def post(self):
    try:
        files_body = self.request.files['file']
    except KeyError:
        error_msg = u"failed to upload file"
        error_msg = self.handle_error(error_msg)
        self.finish(dict(is_succ=False, error_msg=error_msg))
        return

    file_ = files_body[0]
    filename = file_['filename']

    # asynchronous request, obtain OCR info
    files = [('image', filename, file_['body'])]
    fields = (('api_key', config.OCR_API_KEY), ('api_secret', config.OCR_API_SECRET))
    content_type, body = encode_multipart_formdata(fields, files)
    headers = {"Content-Type": content_type, 'content-length': str(len(body))}
    request = tornado.httpclient.HTTPRequest(config.OCR_HOST, method="POST", headers=headers, body=body,
                                             validate_cert=False, request_timeout=30)
    try:
        response = yield tornado.httpclient.AsyncHTTPClient().fetch(request)
    except Exception as e:
        logging.error(u'ocr timeout {}'.format(e))
        error_msg = u"OCR timeout"
        error_msg = self.handle_error(error_msg)
        self.finish(dict(is_succ=False, error_msg=error_msg))
        return

    if not response.error and response.body:
        data = json.loads(response.body)
        self.extra_info(data)
        result = dict(is_succ=True, error_msg=u"", data=data)
    else:
        result = dict(is_succ=False, error_msg=u"request timeout", data={})
    self.finish(result)
As the code shows, I want to write an API that handles ID-card picture uploads and posts a request to a third-party interface to get the ID-card information.
This API runs well on my PC; however, it times out on the testing server, and I cannot figure out where the problem is.
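One way to narrow this down is to check whether the testing server can reach the OCR host at all, independent of this handler. A minimal standalone sketch, run directly on the testing server (it assumes the same config module as above):

import tornado.httpclient

import config

# A synchronous client is fine for a one-off connectivity check.
client = tornado.httpclient.HTTPClient()
try:
    response = client.fetch(config.OCR_HOST, method="GET",
                            connect_timeout=10, request_timeout=30,
                            validate_cert=False)
    print(response.code)
except tornado.httpclient.HTTPError as e:
    # Raised for non-2xx responses and for timeouts (code 599).
    print("HTTP error: %s" % e)
except Exception as e:
    # DNS failures, refused connections, firewall drops, etc.
    print("connection error: %s" % e)
finally:
    client.close()

If this also hangs, the problem is network-level (firewall, DNS, or routing on the testing server) rather than anything in the handler itself.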

Ruby HTTPS Post issue

I have two pieces of code (variable info masked intentionally). With the first one I receive a response with a 200 return code, but with the second one I get 403 Forbidden. Any idea?
def get_token()
  http = Net::HTTP.new(server, 443)
  http.use_ssl = true
  http.verify_mode = OpenSSL::SSL::VERIFY_NONE if http.use_ssl?
  # path (a.k.a. www.mysite.com/some_POST_handling_script.rb)
  path = '/rest/fastlogin/v1.0?app_key=' + app_key + '&username=%2B' + username + '&format=json'
  puts path
  headers = {'Content-Type' => 'application/x-www-form-urlencoded', 'Authorization' => password}
  resp, data = http.post(path, data, headers)
  puts 'Code = ' + resp.code
  puts 'Message = ' + resp.message
  resp.each { |key, val| puts key + ' = ' + val }
  puts data
  puts JSON.parse(resp.body)["access_token"]
  result = {}
  result["code"] = resp.code
  result["token"] = JSON.parse(resp.body)["access_token"]
  print result
  return result
end

def get_token1()
  path = '/rest/fastlogin/v1.0?app_key=' + app_key + '&username=%2B' + username + '&format=json'
  uri = URI.parse('https://' + server + path)
  http = Net::HTTP.new(uri.host, uri.port)
  http.use_ssl = true
  http.verify_mode = OpenSSL::SSL::VERIFY_NONE if http.use_ssl?
  req = Net::HTTP::Post.new(uri.path)
  req["Authorization"] = password
  puts uri.host
  puts uri.path
  puts uri.port
  resp, data = http.request(req)
  print resp
end
I think this is an authentication issue: the credentials you provide are wrong, and that's why the 403 Forbidden error occurs.
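Another thing worth checking before blaming the credentials: in get_token1 the request is built with Net::HTTP::Post.new(uri.path), and uri.path does not include the query string, so app_key, username and format are silently dropped (uri.request_uri would keep them). That alone can explain a 403 when the first version works. For comparison, a sketch of the same call in Python with requests, where the query parameters are explicit (server, app_key, username and password stand in for the masked values):

import requests

server = "MASKED"    # masked in the question
app_key = "MASKED"   # masked in the question
username = "MASKED"  # masked in the question
password = "MASKED"  # masked in the question

resp = requests.post(
    "https://" + server + "/rest/fastlogin/v1.0",
    # requests percent-encodes the leading "+" (the %2B in the Ruby path)
    params={"app_key": app_key, "username": "+" + username, "format": "json"},
    headers={"Authorization": password},
    verify=False,  # mirrors VERIFY_NONE above; avoid outside testing
)
print(resp.status_code)
print(resp.text)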

How do I use the Bitly API to shorten a list of URLs?

I have an account with Bitly which personalizes my URL shortening. How can I use the API to sign in and shorten a list of URLs?
Here is my solution in Python, using the requests library:
import base64
import json

import requests

credentials = 'USERNAME:PASSWORD'
urls = ['www.google.com', 'www.google.co.uk', 'www.google.fr']

def getShortURLs(urls):
    token = auth()
    return shortenURLs(token, urls)

def auth():
    base_auth = "https://api-ssl.bitly.com/oauth/access_token"
    # b64encode wants bytes; decode back to str for the header value
    headers = {'Authorization': 'Basic ' + base64.b64encode(credentials.encode()).decode()}
    resp = requests.post(base_auth, headers=headers)
    return resp.content

def shortenURLs(token, long_urls):
    base = 'https://api-ssl.bitly.com/v3/shorten'
    short_urls = []
    for long_url in long_urls:
        if long_url:
            params = {'access_token': token, 'longUrl': 'https://' + long_url}
            response = requests.get(base, params=params)
            r = json.loads(response.content)
            short_urls.append(r['data']['url'])
    return short_urls
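A quick usage sketch (untested; assumes the credentials above are valid):

short_urls = getShortURLs(urls)
print(short_urls)  # e.g. ['http://bit.ly/...', ...]

Note that this targets the legacy v3 endpoint; newer Bitly accounts generally use the v4 API, which authenticates with a Bearer token instead of the access_token query parameter.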
