First of all, let me say that I am not asking "how to check the status of a previously uploaded video". My question is about getting the status of a video from the response of an upload. I am using the dotnet client, and right after an upload is completed the response is a Google.Apis.Youtube.v3.Data.Video object. That object has a property called Status that contains the following fields, among others: RejectionReason, PrivacyStatus and UploadStatus. The problem is that only PrivacyStatus and UploadStatus have values; RejectionReason is null. Jeff Posnick mentioned the following (see the whole thread here):
There's no way to determine whether a video is a duplicate or not as part of the upload response, because YouTube doesn't know whether the video is a duplicate until it has processed the video, and processing takes place after the upload has completed
That's a bit strange, because when I issue a video.list right after the upload, the API returns a status for the uploaded video. So even though the video is not published and YouTube still seems to be indexing/processing it, the API already knows the status of the just-uploaded video. So why can it not return the status as part of the response?
It's important that the response include the status because otherwise, in the code, we have to do two API calls for every upload: (1) insertmediaupload, then (2) video.list. That's a very costly operation, especially since not all uploads will be duplicates.
EDIT
In response to Jeff Posnick's comment below, the question is: "can the API wait a few seconds, check whether the processing is done, and then include the status as part of the response?"
I came up with that question because of the behavior I've seen: when I issue a video.list right after the upload, the API returns a status for the uploaded video. But I've been playing around with the API and got inconsistent results. I have uploaded the same video over and over, and sometimes there is a "duplicate" status and sometimes there is none. Please take note of the steps I took, #1 and #2 above. There is no other code in between those two API calls.
I'm not sure what the question is here.
You seem to understand the limitations of the way uploads work with the YouTube API, and those limitations still apply with v3 of the YouTube Data API. At the time that a response is returned from the videos.insert() request, the status of the video is not known, because it hasn't been processed yet. The actual processing might happen a second or two after the video has been uploaded, or it might happen a few minutes (or longer) after the video has been uploaded, especially for larger video files. It's not done in real time, and it's not reasonable to expect the videos.insert() API call to block waiting for the processing to be finished.
I'd disagree with your assessment that performing a videos.list(id=...,part=status) is a "very costly operation". The amount of bandwidth and YouTube API quota that consumes is minimal compared to an actual video upload. It would be nice to provide a way to communicate back the processing status independent of the videos.insert() call via some sort of callback or push update mechanism, but we don't have anything available like that at this time. You have to poll videos.list(id=...,part=status).
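As a rough sketch of that two-call pattern using the Python client (google-api-python-client) rather than the .NET one — the youtube service object, the file path, and the polling timeout are placeholders, not something prescribed by the API:

    import time
    from googleapiclient.http import MediaFileUpload

    def upload_and_wait_for_status(youtube, path="my_video.mp4", timeout_s=300):
        # (1) videos.insert: resumable upload of the file
        request = youtube.videos().insert(
            part="snippet,status",
            body={
                "snippet": {"title": "Test upload", "categoryId": "22"},
                "status": {"privacyStatus": "private"},
            },
            media_body=MediaFileUpload(path, chunksize=-1, resumable=True),
        )
        response = None
        while response is None:
            _, response = request.next_chunk()
        video_id = response["id"]

        # (2) videos.list: poll until processing finishes; only then is
        # status.rejectionReason (e.g. "duplicate") populated.
        deadline = time.time() + timeout_s
        while time.time() < deadline:
            result = youtube.videos().list(part="status", id=video_id).execute()
            status = result["items"][0]["status"]
            if status.get("uploadStatus") in ("processed", "rejected", "failed"):
                return status
            time.sleep(5)
        return None  # still processing when we gave up waiting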
Related
I'm trying to learn how to do pseudo streaming for MP4 files. I can't think of a good way to do it, but I just found a great example app that has a similar implementation (though I don't understand how it does it yet).
Here's the scenario:
Alice can send a video to Bob in the app
Bob can open it immediately and see Alice's video, from beginning, while Alice is still recording it
Also, Bob can choose to view the video later, after Alice has finished recording. But Bob should be able to start viewing the video instantly, without waiting long, even when the video file is large.
Thus, my hunch is that it's using some sort of pseudo streaming for MP4.
Here are the screenshots of the requests Alice's phone makes while using the example app:
The screenshot suggests the example app is making a series of PATCH requests to its server every 0.x seconds. Finally, the very last request is a PATCH that updates the moov information for this MP4.
Thus my question is, how is this implemented (any educated guess will be welcomed)? Or is there any sort of existing protocol/iOS encoder that I didn't know is doing this already?
Thanks a lot!
Reading the text of your question rather than the title, I think there are a number of likely steps:
Alice is recording video
She is sending the video to a streaming server
Alice notifies Bob that the stream is available and sends the URL on the streaming server that Bob can access to retrieve the stream
Bob's video client requests the stream, using range requests to download it chunk by chunk (sketched below)
Having a server in the middle like this is a typical approach for any stream that may have more than one client watching it.
More sophisticated streaming servers may also support delivering the stream at different bit rates, and even encoded with different codecs, for maximum device reach.
There are commercial (e.g. https://www.wowza.com) and open source streaming servers (e.g. https://gstreamer.freedesktop.org) you can look at to get more info on streaming servers and to see some examples.
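As a minimal sketch of the range-request step (4), assuming a hypothetical stream URL on the streaming server; a real player would hand each chunk to its decoder instead of just collecting bytes:

    import requests

    STREAM_URL = "https://streaming.example.com/alice/live.mp4"  # hypothetical
    CHUNK = 256 * 1024  # 256 KiB per request

    def fetch_chunks(url):
        offset = 0
        while True:
            headers = {"Range": "bytes=%d-%d" % (offset, offset + CHUNK - 1)}
            resp = requests.get(url, headers=headers, timeout=10)
            if resp.status_code in (200, 206) and resp.content:
                yield resp.content            # feed this to the player/decoder
                offset += len(resp.content)
            elif resp.status_code == 416:
                break                         # requested past the end: caught up
            else:
                break                         # give up on other errors

For a live stream that is still growing, the client would sleep and retry on 416 instead of stopping.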
Is there a possibility that a YouTube video (streamed with help from YouTube API) can temporarily be in some sort of banned state, making it dispatch errorEvents with errorCodes 100, 101, 150, only to eventually come back to a normal state that doesn't dispatch errorEvents?
In my implementation, I store some information about youtube videos in a database. I delete information about videos that dispatch these errorEvents, since videos that do not work are not in my interest to show.
Now, the problem for me is a video that only dispatches errorEvents for a short period of time: it gets deleted from my system but actually still works.
Is it common for videos to temporarily or periodically dispatch errorEvents?
The Player API documentation gives an overview of reasons why onError might be triggered. You should be able to interpret from that whether the scenario you describe is likely or not.
For example, error 100 can be triggered when a video is private. It's obviously possible for the owner of a video to flip it back and forth between private and public, so whether you want to ignore that video id in the future is up to you—it depends on how likely you think it is that the video will be made public again.
Error 101 is triggered when a video isn't enabled for embedding. That's another option that can be changed by the owner of the video at any point. It's again up to you to determine whether you want to take the current state of the video into account and disregard that video id, or leave it in your system on the off chance that the owner makes it embeddable in the future.
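One hedged alternative, outside the Player API itself, is to double-check a flagged video through the Data API before deleting it from your database: videos.list with part=status exposes privacyStatus and embeddable, which correspond roughly to errors 100 and 101/150. A sketch, assuming an authorized v3 client object called youtube:

    def should_keep(youtube, video_id):
        resp = youtube.videos().list(part="status", id=video_id).execute()
        if not resp.get("items"):
            return False                     # video removed entirely (error 100)
        status = resp["items"][0]["status"]
        if status.get("privacyStatus") == "private":
            return False                     # private right now; may flip back later
        if status.get("embeddable") is False:
            return False                     # embedding disabled (errors 101/150)
        return True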
I'm seeing issues where adding multiple entries to a playlist in a short amount of time seems to fail regularly without any error responses.
I'm using the json-c format with version 2.1 of the API. If I send POST requests to add 7 video entries to a playlist, only 3-5 of them actually end up being added to the playlist.
I am getting back a 201 Created response from the API for all requests.
Here's what a request looks like:
{"data":{"position":0,"video":{"duration":0,"id":"5gYXlTe0JTk","itemsPerPage":0,"rating":0,"startIndex":0,"totalItems":0}}}
and here's the response:
{"apiVersion":"2.1","data":{"id":"PLL_faWZNDjUU42ieNrViacdvqvG714P4QjvSDgGRg1kc","position":4,"author":"Lance Andersen","video":{"id":"5gYXlTe0JTk","uploaded":"2012-08-16T19:27:19.000Z","updated":"2012-09-28T20:20:39.000Z","uploader":"usanahealthsciences","category":"Education","title":"What other products does USANA offer?","description":"Discover USANA's other high-quality products: the Sens skin and hair care line, USANA Foods, the RESET weight-management program, and Rev3 Energy.","thumbnail":{"sqDefault":"http://i.ytimg.com/vi/5gYXlTe0JTk/default.jpg","hqDefault":"http://i.ytimg.com/vi/5gYXlTe0JTk/hqdefault.jpg"},"player":{"default":"http://www.youtube.com/watch?v=5gYXlTe0JTk&feature=youtube_gdata_player","mobile":"http://m.youtube.com/details?v=5gYXlTe0JTk"},"content":{"5":"http://www.youtube.com/v/5gYXlTe0JTk?version=3&f=playlists&d=Af8Xujyi4mT-Oo3oyndWLP8O88HsQjpE1a8d1GxQnGDm&app=youtube_gdata","1":"rtsp://v6.cache3.c.youtube.com/CkgLENy73wIaPwk5JbQ3lRcG5hMYDSANFEgGUglwbGF5bGlzdHNyIQH_F7o8ouJk_jqN6Mp3Viz_DvPB7EI6RNWvHdRsUJxg5gw=/0/0/0/video.3gp","6":"rtsp://v7.cache7.c.youtube.com/CkgLENy73wIaPwk5JbQ3lRcG5hMYESARFEgGUglwbGF5bGlzdHNyIQH_F7o8ouJk_jqN6Mp3Viz_DvPB7EI6RNWvHdRsUJxg5gw=/0/0/0/video.3gp"},"duration":72,"aspectRatio":"widescreen","rating":5.0,"likeCount":"6","ratingCount":6,"viewCount":1983,"favoriteCount":0,"commentCount":0,"accessControl":{"comment":"allowed","commentVote":"allowed","videoRespond":"moderated","rate":"allowed","embed":"allowed","list":"allowed","autoPlay":"allowed","syndicate":"allowed"}},"canEdit":true}}
The problem doesn't change if I set the position attribute.
If I send them sequentially with a 5-second delay between them, the results are more reliable, with 6 of the 7 usually making it onto the playlist.
It seems like there is a race condition happening on the api server side.
I'm not sure how to handle this problem since I am seeing zero errors in the api call responses.
I have considered doing batch processing, but can't find any documentation on it for the json-c format. I'm not sure if that would make a difference anyway.
Is there a solution to reliably adding playlist entries to a playlist?
This was fixed in an update to the YouTube Data APIs around the 25th of October.
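For anyone hitting something similar today, here is a defensive sketch of the insert-sequentially-then-verify idea, written against the current v3 API (playlistItems.insert / playlistItems.list) rather than the retired v2/json-c endpoints used in the question; the youtube client and the pacing values are assumptions:

    import time

    def add_videos_reliably(youtube, playlist_id, video_ids, delay_s=2, max_rounds=3):
        for _ in range(max_rounds):
            # List what is actually in the playlist right now.
            present = set()
            req = youtube.playlistItems().list(
                part="contentDetails", playlistId=playlist_id, maxResults=50)
            while req is not None:
                resp = req.execute()
                present |= {i["contentDetails"]["videoId"] for i in resp["items"]}
                req = youtube.playlistItems().list_next(req, resp)
            missing = [v for v in video_ids if v not in present]
            if not missing:
                return True
            # Re-insert anything that silently went missing, pacing the writes.
            for vid in missing:
                youtube.playlistItems().insert(
                    part="snippet",
                    body={"snippet": {
                        "playlistId": playlist_id,
                        "resourceId": {"kind": "youtube#video", "videoId": vid},
                    }},
                ).execute()
                time.sleep(delay_s)
        return False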
Is it possible to implement a feature that allows users to watch videos as they are being uploaded to the server by others? Is HTML5 suitable for this task? What about Flash? Are there any ready-to-go solutions? I don't want to reinvent the wheel. The application will be hosted on a dedicated server.
Thanks.
Of course it is possible; the data is there, isn't it?
However, it will be very hard to implement.
Also, I am not that into Python and I am not aware of a library or service that suits your requirements, but I can cover the basics of video streaming.
I assume you are talking about video files that are uploaded and not streams. Because, for that, there are obviously thousands of solutions out there...
In the simplest case, the video being uploaded is already ready to be served to your clients and has a so-called "faststart" atom. Atoms are container-format specific and there are sometimes a bunch of them; the most common one here is the moov atom. It contains a lot of data and is very complex, but in our use case, in a nutshell, it holds the data that enables the client to begin playing the video right away, using only the data available from the beginning of the file.
You need that if you have progressive-download videos (YouTube...), meaning a file served from a web server: the player can already start playing even though you obviously have not downloaded the full file yet.
If the faststart atom were not present, that would not be possible.
Sometimes it is possible anyway, but the player, for example, cannot display a progress bar, because it doesn't know how long the file is.
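As a side note, a quick sketch of producing that faststart layout after the fact with ffmpeg's +faststart flag, invoked from Python; the file names are placeholders:

    import subprocess

    subprocess.run(
        ["ffmpeg", "-y", "-i", "upload.mp4",
         "-c", "copy",                 # remux only, no re-encode
         "-movflags", "+faststart",    # rewrite the file with moov before mdat
         "upload_faststart.mp4"],
        check=True,
    )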
With that covered, the file can be uploaded. You will need an upload solution that writes the data directly to a buffer or a file (a file will be easier).
This is almost always the case; for example, PHP creates a file in the tmp_dir. You can also specify the location if you want to find the video while it's being uploaded.
Well, now you can start reading that file byte by byte and writing that data to a connection to another client. Just be sure not to go ahead of what has already been received and written. You would probably initiate your upload with metadata kept in memory that holds the currently received byte position and the location of the file.
Anyone who requests the file after the upload has started can just receive the entire file, or, if the upload is not yet finished, get it from your application.
You will have to throttle the data delivery or pause it when the data runs short. This will appear to the client almost like a "slow connection". However, you will have to echo some data from time to time to prevent the connection from closing. But if your upload doesn't stall (and why should it?), that shouldn't be a problem.
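A rough sketch of that read-while-uploading loop; upload_done stands in for the in-memory upload metadata mentioned above, and the chunk size and poll interval are arbitrary:

    import time

    def stream_growing_file(path, upload_done, chunk=64 * 1024, poll_s=0.5):
        with open(path, "rb") as f:
            while True:
                data = f.read(chunk)
                if data:
                    yield data            # write this straight to the client connection
                elif upload_done():
                    break                 # uploader finished and we have caught up
                else:
                    time.sleep(poll_s)    # caught up with the uploader: wait for more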
Now, if you want to have something like on-the-fly transcoding of various input formats into your desired output format, things get interesting.
AFAIK, ffmpeg has neat APIs that let you deal directly with data streams.
HandBrake is also a very good tool; however, you would need to take the long road of using external executables.
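For the on-the-fly case, here is a sketch that shells out to the ffmpeg executable and reads a streamable fragmented MP4 from its stdout; the input file, codecs, and chunk size are assumptions, and whether ffmpeg copes with a still-growing input depends on the container:

    import subprocess

    proc = subprocess.Popen(
        ["ffmpeg", "-i", "incoming_upload.avi",
         "-c:v", "libx264", "-c:a", "aac",
         "-movflags", "frag_keyframe+empty_moov",   # fragmented MP4: playable before complete
         "-f", "mp4", "pipe:1"],
        stdout=subprocess.PIPE,
    )

    for chunk in iter(lambda: proc.stdout.read(64 * 1024), b""):
        pass  # forward each chunk to the connected clients here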
I am not really aware of your requirements; however, if your clients are already tuned in, for example on a Red5 streaming server, feeding data into a stream should also work fine.
Yes, take a look at Qik, http://qik.com/
"Instant Video Sharing ... Videos can be viewed live (right as they are being recorded) or anytime later."
Qik provides developer APIs, including ones like these:
qik.stream.subscribe_public_recent -- Subscribe to the videos (live and recorded)
qik.user.following -- Provides the list of people the user is following
qik.stream.public_info -- Get public information for a specific video
It is most certainly possible to do this, but it won't be trivial. And no, I don't think you will find an "out of the box" solution that requires little effort on your behalf.
You say you want to let:
users watch videos as they are uploaded to server by others
Well, this could be interpreted two different ways:
Do you mean that you don't want a user to have to refresh the page before seeing new videos that other users have just finished uploading?
Or do you mean that you want one user to be able to watch a partially uploaded video (aka another user is still in the process of uploading it and right now the server only contains a partial upload of the video)?
Implementing #1 wouldn't be hard at all. You would just need an AJAX script to check for newly uploaded videos, and those videos could then be served to the user in whatever way you choose. HTML5 vs. Flash isn't really a consideration here.
The second scenario, on the other hand, would require quite a bit of effort. I am guessing that HTML5 might not be mature enough to handle this type of situation. If you are not looking to reinvent the wheel and don't have a lot of time to dedicate to this feature, then I would say that you are out of luck. You may be able to use ffmpeg to parse partial video files and feed them to a Flash player, but I would consider this a large task.
I have access to a proxy server and I can find out the time a video was requested. The log has the form (time, IP, URL). I want to somehow figure out how many seconds a particular user at IP address A watched a YouTube video. Any suggestions?
If you only have access to requests, you obviously can't tell the difference between someone just loading a video and actually watching it.
So, the best you can do is to come up with a set of heuristics that tries to 'guess' it by observing certain actions of the user. Here are a few ideas:
Does your log capture the requests for the video buffer itself? If it does, you can see how much of the video was actually loaded, and the watched time can't be more than that.
If you (quite naively, I guess) assume that they're finished watching when they request another video URL, you can use this as your trigger for ending a "video session" (sketched after this list).
Install Wireshark or similar and start watching activity from YouTube during the video. Can you identify if there's a request when advertising is shown, or the related videos are displayed when the video finishes?
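To illustrate the second idea, here is a sketch that treats the gap between a watch-page request and the same IP's next request as an upper bound on the video session; the log rows and URL matching are illustrative placeholders:

    from collections import defaultdict
    from datetime import datetime

    LOG = [  # stand-in for parsed (time, IP, URL) rows from the proxy log
        (datetime(2013, 1, 1, 10, 0, 0), "1.2.3.4", "http://www.youtube.com/watch?v=abc"),
        (datetime(2013, 1, 1, 10, 3, 30), "1.2.3.4", "http://www.youtube.com/watch?v=def"),
    ]

    def estimate_sessions(log):
        by_ip = defaultdict(list)
        for ts, ip, url in sorted(log):
            by_ip[ip].append((ts, url))
        sessions = []
        for ip, rows in by_ip.items():
            for (ts, url), (next_ts, _) in zip(rows, rows[1:]):
                if "youtube.com/watch" in url:
                    # naive upper bound: they watched at most until their next request
                    sessions.append((ip, url, (next_ts - ts).total_seconds()))
        return sessions

    print(estimate_sessions(LOG))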
In all honesty, though, I think it will be virtually impossible to derive such a specific metric as seconds watched from data as limited as the point in time a video was requested. Just think of what could mess up any strategy you come up with: the user could load several videos in different tabs in a burst, or load a video page, pause it, and forget about it for minutes or hours before actually watching it.
In short: I don't think you'll get a reliable guess using only the data you have, but if you absolutely must at least try, observing network activity between the client and YouTube that only happens while a video is in the playing state (pulling advertisements, related videos, some sort of internal YouTube logging, etc.) is probably your best bet. Even that probably won't have granularity anywhere close to seconds, though.