I'm trying to post a video on behalf of someone through Twython.
I have followed the Twython video upload docs, but it fails with an error in the upload_video() method that their GitHub issue tracker marks as solved (it still happens for me, though).
I tried an SO solution I found, but it also fails, with TypeError: post() got an unexpected keyword argument 'files'.
So... is there any way to achieve this using Twython?
My code:
from twython import Twython
twitter = Twython(...)
video = open(video_path, 'rb')
response = twitter.upload_video(media=video, media_type='video/mp4')
twitter.update_status(status='Checkout this cool video!', media_ids=[response['media_id']])
Result
.
.
response = twitter.upload_video(media=video, media_type='video/mp4')
File "/usr/local/lib/python3.5/dist-packages/twython/endpoints.py", line 184, in upload_video
media_chunk.write(data)
TypeError: string argument expected, got 'bytes'
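The traceback points inside Twython's own upload_video(), where the library writes each received chunk to a temporary file, which suggests that file is being opened in text mode rather than binary mode. Until a fixed release is available, one possible workaround is to drive Twitter's chunked media upload endpoint directly. This is only a minimal sketch, assuming valid OAuth 1 credentials (all the credential variables are placeholders):

import os
import requests
from requests_oauthlib import OAuth1

UPLOAD_URL = 'https://upload.twitter.com/1.1/media/upload.json'
auth = OAuth1(CONSUMER_KEY, CONSUMER_SECRET, ACCESS_TOKEN, ACCESS_TOKEN_SECRET)

def upload_video_chunked(path, chunk_size=4 * 1024 * 1024):
    total_bytes = os.path.getsize(path)
    # INIT: declare the upload and get a media_id back
    init = requests.post(UPLOAD_URL, auth=auth, data={
        'command': 'INIT', 'media_type': 'video/mp4', 'total_bytes': total_bytes})
    media_id = init.json()['media_id']
    # APPEND: send the file in chunks, numbering each segment
    with open(path, 'rb') as f:
        segment = 0
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            requests.post(UPLOAD_URL, auth=auth,
                          data={'command': 'APPEND', 'media_id': media_id,
                                'segment_index': segment},
                          files={'media': chunk})
            segment += 1
    # FINALIZE: tell Twitter the upload is complete
    requests.post(UPLOAD_URL, auth=auth, data={'command': 'FINALIZE', 'media_id': media_id})
    return media_id

The returned media_id can then be passed to update_status() exactly as in the original snippet.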
I am working on a video analysis project which requires downloading videos from YouTube and uploading them to Google Cloud Storage (GCS). I could not figure out a way to upload them to GCS directly, so I tried downloading them to my local machine first and then uploading them to GCS.
I went through multiple articles on Stack Overflow regarding the same, such as
python: get all youtube video urls of a channel and
Download YouTube video using Python to a certain directory,
and with the help of those I was able to come up with the following script.
import urllib.request
import json
from pytube import YouTube
def get_all_video_in_channel(channel_id):
    api_key = 'YOUR_API_KEY'  # YouTube Data API key (redacted here; never post real keys)
    base_video_url = 'https://www.youtube.com/watch?v='
    base_search_url = 'https://www.googleapis.com/youtube/v3/search?'
    first_url = base_search_url + 'key={}&channelId={}&part=snippet,id&order=date&maxResults=25'.format(api_key, channel_id)
    video_links = []
    url = first_url
    while True:
        inp = urllib.request.urlopen(url)
        resp = json.load(inp)
        # Collect the video links on this page of results
        for i in resp['items']:
            if i['id']['kind'] == "youtube#video":
                video_links.append(base_video_url + i['id']['videoId'])
        # Follow the pagination token; the last page has none
        try:
            next_page_token = resp['nextPageToken']
            url = first_url + '&pageToken={}'.format(next_page_token)
        except KeyError:
            break
    return video_links
# Get the list of all video URLs in the channel
load_url = get_all_video_in_channel(channel_id)

# Download each video to the local machine. Need to figure out if there is a
# way to upload them to GCS directly.
for url in load_url:
    YouTube(url).streams.first().download('C:/Users/Tushar/Documents/Serato_Video_Intelligence/youtube_videos')
It works only for the first two video URLs and then fails with the error below:
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "C:\Python37\lib\site-packages\pytube\streams.py", line 217, in download
bytes_remaining = self.filesize
File "C:\Python37\lib\site-packages\pytube\streams.py", line 164, in filesize
headers = request.get(self.url, headers=True)
File "C:\Python37\lib\site-packages\pytube\request.py", line 21, in get
response = urlopen(url)
File "C:\Python37\lib\urllib\request.py", line 222, in urlopen
return opener.open(url, data, timeout)
File "C:\Python37\lib\urllib\request.py", line 531, in open
response = meth(req, response)
File "C:\Python37\lib\urllib\request.py", line 641, in http_response
'http', request, response, code, msg, hdrs)
File "C:\Python37\lib\urllib\request.py", line 569, in error
return self._call_chain(*args)
File "C:\Python37\lib\urllib\request.py", line 503, in _call_chain
result = func(*args)
File "C:\Python37\lib\urllib\request.py", line 649, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 403: Forbidden
I was hoping someone could help me understand what is going wrong here and how to resolve it. I desperately need this and have been unable to resolve the issue for some time.
Thanks a lot in advance!
P.S. If possible, is there a way to upload them to GCS directly?
It seems you could run into a conflict with YouTube's Terms of Service, so I suggest you check that document and pay particular attention to Section 5, letter B. [1]
[1]https://www.youtube.com/static?gl=US&template=terms
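As for the P.S.: once a video is on the local disk, the google-cloud-storage client library can push it into a bucket, so the download and upload steps can at least be chained in one script. A minimal sketch, assuming the library is installed, Application Default Credentials are configured, and that my_bucket, videos/clip.mp4, and the local filename are placeholder names:

from google.cloud import storage

def upload_to_gcs(local_path, bucket_name, blob_name):
    # Uses Application Default Credentials (GOOGLE_APPLICATION_CREDENTIALS)
    client = storage.Client()
    bucket = client.bucket(bucket_name)
    blob = bucket.blob(blob_name)
    blob.upload_from_filename(local_path)

upload_to_gcs('C:/Users/Tushar/Documents/Serato_Video_Intelligence/youtube_videos/clip.mp4',
              'my_bucket', 'videos/clip.mp4')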
I am doing an OpenCV project and I can't seem to find a way to send the frames to my Telegram using the telepot module. I've already set up the Telegram bot.
# ----- OpenCV processing -----
cv2.imshow('Object detector', frame)
bot.sendPhoto(238460030, (frame,'rb'))
I get this error:
AttributeError: 'str' object has no attribute 'read'
If I'm not mistaken, you can't pass a raw OpenCV frame; you need to save the photo to a file first, then send the opened file:
photo = open('img.jpeg', 'rb')
bot.sendPhoto(chat_id, photo)
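Putting it together, a minimal sketch: write the frame to disk with cv2.imwrite() and then hand the opened file to sendPhoto(). The bot token is a placeholder, and `frame` is assumed to come from your existing OpenCV pipeline:

import cv2
import telepot

bot = telepot.Bot('BOT_TOKEN')  # placeholder token
chat_id = 238460030

# ... OpenCV processing that produces `frame` ...
cv2.imwrite('frame.jpg', frame)          # save the frame as an image file first
with open('frame.jpg', 'rb') as photo:   # then open it in binary mode
    bot.sendPhoto(chat_id, photo)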
I'm trying to scrape this link using Jsoup with Kotlin/Java, and I am having trouble scraping the players part (under "Current Squad"). Could anyone parse it?
You cannot access that information directly using only the response from that link.
You can instead build JSON objects from the HTTP responses of https://stats.fn.sportradar.com/betsgi/en/America:Argentina:Buenos_Aires/gismo/stats_team_squad/2817 and https://stats.fn.sportradar.com/betsgi/en/America:Argentina:Buenos_Aires/gismo/stats_teamplayer_facts/2817/42556.
As an example, in Python you can get the minutes played by each player as follows:
import urllib.request
import json

# One endpoint lists the squad, the other holds per-player facts
f = urllib.request.urlopen('https://stats.fn.sportradar.com/betsgi/en/America:Argentina:Buenos_Aires/gismo/stats_team_squad/2817')
f2 = urllib.request.urlopen('https://stats.fn.sportradar.com/betsgi/en/America:Argentina:Buenos_Aires/gismo/stats_teamplayer_facts/2817/42556')
j = json.loads(f.read())
j2 = json.loads(f2.read())

plrs = j['doc'][0]['data']['players']
for plr in plrs:
    print('=========================')
    print(plr['name'])
    try:
        print('minutes played: ' + str(j2['doc'][0]['data'][str(plr['_id'])]['stats']['total']['minutes_played']))
    except KeyError:
        # Not every player has an entry in the facts feed
        pass
I am trying to save a simple template to PDF using the Rendering plugin, but I cannot get it to work no matter what I try. All I need is for it to save a file within the file system on the server and redirect to a different page.
At the minute the PDF template does not need any parameters, as it just prints "hello world". Once I get this working I will attempt to add some data.
I am getting errors saying I need to specify a controller if no '/' is appended, but appending one has not helped. I also don't understand which controller it needs, as I have already tried specifying the controller in which this action is declared.
Can someone please have a look at this and tell me what I'm doing wrong?
RenderingService pdfRenderingService

def displayPDFSummary = {
    ByteArrayOutputStream bytes = pdfRenderingService.render(template: "_pdfTemplate",
            controller: "RSSCustomerOrder", model: [origSessionId: params.origSessionId])
    def fos = new FileOutputStream('NewTestFile.pdf')
    fos.write(bytes)
    fos.close()
    render(template: "_pdfTemplate", params: [origSessionId: params.origSessionId])
}
I am getting the following error messages in the console:
groovy.lang.MissingMethodException: No signature of method: java.io.FileOutputStream.write() is applicable for argument types: (java.io.ByteArrayOutputStream)
(Then prints contents of template...)
Possible solutions: write([B), write(int), write([B), write(int), wait(), wait(long)
Did you look at the FileOutputStream docs? There's no write(OutputStream) overload, which is exactly what the MissingMethodException is telling you.
Try fos.write(bytes.toByteArray()). Alternatively, bytes.writeTo(fos) should also work, and it avoids copying the byte array.
I'm building a Rails app that takes information about products from an XML datafeed hosted on a third-party server. This XML is sent gzipped, and I'm having serious difficulty getting anywhere with it.
I've spent a fair bit of time with Google on this, but the results of my searching seem to be more about sending gzipped output rather than receiving gzipped input.
The closest I've come to a solution came from Stack Overflow, but I'm still getting errors.
What I'm trying to do in the first instance is print the XML data to the browser; then I can start processing it. Here's my current code:
def load_data
  url = "http://xml.domain.com/datafeed/"
  xml_input = Net::HTTP.get(URI.parse(url))
  zstream = Zlib::Inflate.new
  @xml_output = zstream.inflate(xml_input)
  zstream.finish
  zstream.close
end
The error I'm getting from it is:
Zlib::DataError in Cron/get datafeedController#load_data
incorrect header check
I guess this means the data isn't in the format zlib expects. Two things I have ruled out are a bad URL and a non-gzipped response: the URL is valid and the response is definitely gzipped, but I'm stuck on how to get past this.
Any help would be greatly appreciated :-)
Sorted! The feed is gzip-wrapped, which is why raw Zlib::Inflate choked on the header, so Zlib::GzipReader is the right tool:
file = Net::HTTP.get(URI.parse(url))
gz = Zlib::GzipReader.new(StringIO.new(file))  # wrap the body so GzipReader can stream from it
whole_xml = gz.read
Then to load into Hpricot to do the XML parsing:
hp = Hpricot(whole_xml)