How can I get the size of Dailymotion videos? - dailymotion-api

I want to keep track of the data traffic used while the user is watching a Dailymotion video. Does anyone have an idea? Thanks in advance.

This info is not available in the API. Note that the data used will depend on the video quality (1080p, 720p, 480p, etc.), and that may vary from one user to another, depending on their device, bandwidth, etc. It is automatically chosen by the video player, but the user can also decide to specify a quality.
You will see all available fields for a video object in the documentation at: https://developer.dailymotion.com/api#video-fields
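If it helps, here is a minimal sketch (Swift, URLSession) of querying the Data API for fields that do exist, such as title and duration; the video id is only a placeholder and the field list should be checked against the documentation linked above:

    import Foundation

    // Sketch: request documented fields (title, duration) for a video.
    // "x26hv6c" is only a placeholder video id.
    let url = URL(string: "https://api.dailymotion.com/video/x26hv6c?fields=title,duration")!

    URLSession.shared.dataTask(with: url) { data, _, error in
        guard let data = data, error == nil else { return }
        // The response is a small JSON object containing exactly the requested fields.
        if let fields = try? JSONSerialization.jsonObject(with: data) as? [String: Any] {
            print(fields)   // e.g. ["title": "...", "duration": 123]
        }
    }.resume()

The transferred size itself is still not exposed, so the closest you can get is an estimate from the duration and an assumed bitrate for the quality being played.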

Just do a getimagesize() on the thumbnail_url!

Related

YouTube Content ID: Is there a way to check if some content is under copyright?

I have a live stream that I set up and monitor through the YouTube APIs.
It has happened to me that I get the following warning:
We've detected copyrighted audio in your stream. Your stream may be temporarily blocked.
Is there a way to check this beforehand, with an API or something, so that I could prevent this content from being streamed before it reaches YouTube?
Note: I know the pedantic answer to this would be: all content is under copyright. But please note that many content owners allow their media to be used as long as they can monetize from it (that's pretty much the spirit of Content ID). Some others don't allow it and that's when you may get a copyright notice.

Store data in video frames while capturing the video in iOS

How can I store data such as time or latitude with the video frames (per second) while capturing video, and retrieve this data back from the saved video on iOS?
Most video formats include metadata describing the whole video at the asset level. When using AVAssetWriter, you can add metadata items before writing to associate them with the output file. https://developer.apple.com/reference/avfoundation/avassetwriter#//apple_ref/occ/instp/AVAssetWriter/metadata
There are common keys (https://developer.apple.com/reference/avfoundation/1668870-av_foundation_metadata_key_const/1669056-common_metadata_keys) you can use to store the information if you like.
Note this is only at the file level, not per frame.
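As a rough illustration of that asset-level approach, here is a short Swift sketch (the output URL and the location string are placeholders, and you would still add your own inputs before writing):

    import AVFoundation

    // Sketch: attach asset-level metadata to the writer before calling startWriting().
    func makeWriter(outputURL: URL) throws -> AVAssetWriter {
        let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)

        let creationDate = AVMutableMetadataItem()
        creationDate.identifier = .commonIdentifierCreationDate
        creationDate.value = Date() as NSDate

        let location = AVMutableMetadataItem()
        location.identifier = .quickTimeMetadataLocationISO6709
        location.value = "+37.3318-122.0312/" as NSString   // ISO 6709 location string (placeholder)

        writer.metadata = [creationDate, location]
        // ...add video/audio inputs, then startWriting(), startSession(atSourceTime:), etc.
        return writer
    }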
If you want to store information at a "frames per second" type of time reference, then you could build a custom solution interacting with the buffers "vended" by AVFoundation (see "Recording locations in time domain with AVFoundation"). It's possible to then write your own custom storage for that information that's synchronised to the video file, and you would need to read it back and process it yourself.
I don't believe there's a way to encapsulate that "per frame location info" within the actual video file itself (you could perhaps do a hack and repurpose a subtitles AVAssetTrack and write the info, then pull it off but not display it - this would be unpredictable when video was played on other devices however).
ADDITIONAL INFO
Following on from a comment a year after I wrote this, I did some more investigation. While you could use and abuse the subtitle track as suggested, a better solution is to use the AVAsset metadata media type, which is intended specifically for this. https://developer.apple.com/documentation/avfoundation/avmediatype/1390709-subtitle
There are many different AVAssetTrack media types which allow you to tie data to a point in time in a video, including:
Audio
closedCaption
depthData (BETA at time of edit)
metaData <- This is probably what you want (see the sketch after this list)
metaDataObject <- In combination with this one too
muxed
text
timecode
video
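Here is a sketch of that timed-metadata route in Swift. The identifier string ("mdta/com.example.video.location") and the appendLocation helper are my own placeholders; the essential pieces are a .metadata AVAssetWriterInput plus an AVAssetWriterInputMetadataAdaptor, appending one AVTimedMetadataGroup per time range you want to tag:

    import AVFoundation
    import CoreMedia

    // Placeholder identifier for the per-sample data we want to store.
    let locationIdentifier = "mdta/com.example.video.location"

    // Describe the metadata stream so the writer knows what to expect.
    var formatDescription: CMMetadataFormatDescription?
    let specs: [[String: Any]] = [[
        kCMMetadataFormatDescriptionMetadataSpecificationKey_Identifier as String: locationIdentifier,
        kCMMetadataFormatDescriptionMetadataSpecificationKey_DataType as String: kCMMetadataBaseDataType_UTF8 as String
    ]]
    CMMetadataFormatDescriptionCreateWithMetadataSpecifications(
        allocator: kCFAllocatorDefault,
        metadataType: kCMMetadataFormatType_Boxed,
        metadataSpecifications: specs as CFArray,
        formatDescriptionOut: &formatDescription)

    // A .metadata input plus an adaptor, added to the same AVAssetWriter as the video input.
    let metadataInput = AVAssetWriterInput(mediaType: .metadata,
                                           outputSettings: nil,
                                           sourceFormatHint: formatDescription)
    metadataInput.expectsMediaDataInRealTime = true
    let adaptor = AVAssetWriterInputMetadataAdaptor(assetWriterInput: metadataInput)
    // writer.add(metadataInput)   // alongside your video/audio inputs

    // Append one timed group per sample you want to tag.
    func appendLocation(_ text: String, at time: CMTime, duration: CMTime) {
        let item = AVMutableMetadataItem()
        item.identifier = AVMetadataIdentifier(rawValue: locationIdentifier)
        item.dataType = kCMMetadataBaseDataType_UTF8 as String
        item.value = text as NSString
        let group = AVTimedMetadataGroup(items: [item],
                                         timeRange: CMTimeRange(start: time, duration: duration))
        _ = adaptor.append(group)
    }

You can read the groups back later from the metadata track of the saved asset (for example via AVAssetReader or AVPlayerItemMetadataOutput) and process them yourself.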

iOS UIImagePickerController for Video URL Only

I'm using the UIImagePickerController to select a video from the device's camera roll. However, I'm not interested in viewing the video at this time; I want to save the URL (in Core Data) so that when the user chooses the name of the video from, for example, a pickerView, the video will load and play at that time.
My understanding (which may be wrong) is that the UIImagePickerController makes a compressed copy into the sandbox and provides two different URLs (in the info dictionary). It is kind of a guess at this point, but what I think is:
UIImagePickerControllerMediaURL is the url that points to the original video; and
UIImagePickerControllerReferenceURL is the url that points to the copy.
Here are my questions:
a) Is my assumption correct as to what the two URLs point to, and can I count on the ReferenceURL to point to the selected video so long as it is on the device's camera roll?
and
b) Under the circumstances, is there any way to avoid the compression? From reading on SO, I'm thinking there may not be, but I haven't seen any posts that relate exactly to what I'm doing. The structure of my app is such that there could be a lot of these videos and users will not want to get rid of the original, so there is no point in having both the original and compressed version around.
All I'm interested in is a URL I can use to access the video in the camera roll. I will also have to get a thumbnail of it to store with the URL, but I think I see how to do that.
Any help on this will be greatly appreciated.
If you only want a URL to access the video, you can use UIImagePickerControllerMediaURL: this is the filesystem URL for the movie (if editing is enabled, it points to the edited/trimmed video). If you want the original video URL, use UIImagePickerControllerReferenceURL, which is the Assets Library URL for the original version of the video (the item actually selected, without editing). You can, of course, set controller.allowsEditing = NO to prevent the user from editing the video, so that UIImagePickerControllerMediaURL then contains the URL of the original, unedited video.
AFAIK no compression is applied to the recorded/selected video by default; that only happens if you press the Share button and try to send the file over MMS, MobileMe, etc. Just make sure you set controller.videoQuality = UIImagePickerControllerQualityTypeHigh to get the highest quality.
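For reference, a short Swift sketch of that setup (using the modern constant names: the old UIImagePickerControllerMediaURL and UIImagePickerControllerReferenceURL keys correspond to .mediaURL and .referenceURL, the latter being deprecated in current SDKs in favour of PhotoKit):

    import UIKit
    import MobileCoreServices

    class VideoPickerDelegate: NSObject, UIImagePickerControllerDelegate, UINavigationControllerDelegate {

        func pick(from presenter: UIViewController) {
            let picker = UIImagePickerController()
            picker.sourceType = .photoLibrary
            picker.mediaTypes = [kUTTypeMovie as String]
            picker.allowsEditing = false                  // keep the original, untrimmed video
            picker.videoQuality = .typeHigh               // highest quality the picker offers
            picker.delegate = self
            presenter.present(picker, animated: true)
        }

        func imagePickerController(_ picker: UIImagePickerController,
                                   didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
            // Filesystem URL of the copy the picker provides in the app sandbox.
            let mediaURL = info[.mediaURL] as? URL
            // Assets Library URL of the original item in the camera roll (deprecated).
            let referenceURL = info[.referenceURL] as? URL
            print(mediaURL as Any, referenceURL as Any)   // store whichever you need, e.g. in Core Data
            picker.dismiss(animated: true)
        }
    }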

Watch video in the time they are uploaded

Is it possible to implement a feature that allows users to watch videos as they are being uploaded to the server by others? Is HTML5 suitable for this task? What about Flash? Are there any ready-to-go solutions? I don't want to reinvent the wheel. The application will be hosted on a dedicated server.
Thanks.
Of course it is possible; the data is there, isn't it?
However, it will be very hard to implement.
Also, I am not that into Python and I am not aware of a library or service suiting your requirements, but I can cover the basics of video streaming.
I assume you are talking about video files that are uploaded and not streams. Because, for that, there are obviously thousands of solutions out there...
In the most simple case the video being uploaded is already ready to be served to your clients and has a so-called "faststart atom". These atoms are container-format specific and there are sometimes a bunch of them. The most common is the moov atom. It contains a lot of data and is very complex; however, in our use case, in a nutshell, it holds the data that enables the client to begin playing the video right away using the data available from the beginning.
You need that if you have progressive-download videos (YouTube...), meaning a file served from a web server: you obviously have not downloaded the full file, yet the player can already start playing.
If the faststart atom were not present, that would not be possible.
Sometimes playback still works, but the player, for example, cannot display a progress bar, because it doesn't know how long the file is.
Having that covered, the file can be uploaded. You will need an upload solution that writes the data directly to a buffer or a file (a file will be easier...).
This is almost always the case; for example, PHP creates a file in the tmp_dir. You can also specify it if you want to find the video while it's being uploaded.
Well, now you can start reading that file byte by byte and write that data to a connection to another client. Just be sure not to go ahead of what has already been received and written. You would probably initiate your upload with a metadata set in memory that holds the currently received byte position and the location of the file.
Anyone who requests the file after the upload has finished can just receive the entire file; if the upload is not yet finished, they get it from your application.
You will have to throttle the data delivery or pause it when the data runs short. This will appear to the client almost as a "slow connection". However, you will have to send some data from time to time to prevent the connection from closing. But if your upload doesn't stall (and why should it?), that shouldn't be a problem.
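A rough sketch of that "never read past what has been received" loop, in Swift to stay consistent with the other examples (bytesReceived, uploadFinished and send are placeholders for whatever bookkeeping and output mechanism your server actually has):

    import Foundation

    func streamPartialUpload(uploadPath: String,
                             bytesReceived: () -> UInt64,
                             uploadFinished: () -> Bool,
                             send: (Data) -> Void) throws {
        let handle = try FileHandle(forReadingFrom: URL(fileURLWithPath: uploadPath))
        defer { handle.closeFile() }

        var offset: UInt64 = 0
        while true {
            let available = bytesReceived()
            if offset < available {
                // Only read bytes the uploader has actually written so far.
                let chunk = handle.readData(ofLength: min(64 * 1024, Int(available - offset)))
                if !chunk.isEmpty {
                    send(chunk)                        // push bytes to the watching client
                    offset += UInt64(chunk.count)
                }
            } else if uploadFinished() {
                break                                  // caught up and the upload is done
            } else {
                // Caught up with the uploader: wait a bit, which the client
                // just perceives as a slow connection.
                Thread.sleep(forTimeInterval: 0.2)
            }
        }
    }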
Now if you want to have something like on-the-fly transcoding of various input formats into your desired output format, things get interesting.
AFAIK ffmpeg has neat APIs which let you deal directly with data streams.
Also, HandBrake is a very good tool; however, you would need to take the long road of using external executables.
I am not really aware of your requirements, however if your clients are already tuned in, for example on a Red5 streaming server, feeding data into a stream should also work fine.
Yes, take a look at Qik, http://qik.com/
"Instant Video Sharing ... Videos can be viewed live (right as they are being recorded) or anytime later."
Qik provides developer APIs, including ones like these:
qik.stream.subscribe_public_recent -- Subscribe to the videos (live and recorded)
qik.user.following -- Provides the list of people the user is following
qik.stream.public_info -- Get public information for a specific video
It is most certainly possible to do this, but it won't be trivial. And no, I don't think that you will find an "out of the box" solution that will require little effort on your behalf.
You say you want to let:
users watch videos as they are uploaded to server by others
Well, this could be interpreted two different ways:
Do you mean that you don't want a user to have to refresh the page before seeing new videos that other users have just finished uploading?
Or do you mean that you want one user to be able to watch a partially uploaded video (aka another user is still in the process of uploading it and right now the server only contains a partial upload of the video)?
Implementing #1 wouldn't be hard at all. You would just need an AJAX script to check for newly uploaded videos, and those videos could then be served to the user in whatever way you choose. HTML5 vs. Flash isn't really a consideration here.
The second scenario, on the other hand, would require quite a bit of effort. I am guessing that HTML5 might not be mature enough to handle this type of situation. If you are not looking to reinvent the wheel and don't have a lot of time to dedicate to this feature, then I would say that you are out of luck. You may be able to use ffmpeg to parse partial video files and feed them to a Flash player, but I would think of this as a large task.

How can I find the number of seconds a YouTube video was played?

I have access to a proxy server and I can find out the time a video was requested. The log has the form (time, IP, URL). I want to somehow figure out how many seconds a particular user at IP address A watched a YouTube video. Any suggestions?
If you only have access to requests, you obviously can't tell the difference between someone merely loading a video and actually watching it.
So, the best you can do is to come up with a set of heuristics that tries to 'guess' it by observing certain actions of the user. Here are a few ideas:
Does your log record the requests for the video buffer itself? If it does, you can see how much of the video was actually loaded, and the watched time can't be more than that.
If you (quite naively, I guess) assume that they're finished watching when they request another video URL, you can use this as your trigger for ending a "video session" (sketched below).
Install Wireshark or similar and start watching activity from YouTube during the video. Can you identify if there's a request when advertising is shown, or the related videos are displayed when the video finishes?
In all honesty, though, I think it will be virtually impossible to derive such a specific metric as seconds watched from data as limited as the point in time a video was requested. Just think of what could mess up any strategy you come up with: the user could load several videos in different tabs in a burst, or he could load a video page, pause it and forget it for several minutes or hours before he does watch it.
In short: I don't think you'll get a reliable guess using only the data you have, but if you absolutely must at least try, observing network activity between client and YouTube that only happens when a video is in the "playing" state (pulling advertisements, related videos, some sort of internal YouTube logging, etc.) is probably your best bet. Even that probably won't have a granularity anywhere close to seconds, though.
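If you do try anyway, the "session ends at the next video request" heuristic mentioned above is simple to prototype. A sketch in Swift, assuming the log has already been parsed into (time, IP, URL) entries; the /watch?v= match is a naive placeholder:

    import Foundation

    struct LogEntry { let time: TimeInterval; let ip: String; let url: String }

    // Upper-bound estimate: for each video request, the time until the same IP's
    // next video request is the longest that video could have been watched.
    func estimateWatchTimes(entries: [LogEntry]) -> [(ip: String, url: String, upperBoundSeconds: TimeInterval)] {
        var results: [(ip: String, url: String, upperBoundSeconds: TimeInterval)] = []
        for (ip, requests) in Dictionary(grouping: entries, by: { $0.ip }) {
            let videoRequests = requests
                .filter { $0.url.contains("/watch?v=") }          // naive watch-page match
                .sorted { $0.time < $1.time }
            for index in 0..<max(videoRequests.count - 1, 0) {
                let gap = videoRequests[index + 1].time - videoRequests[index].time
                results.append((ip: ip, url: videoRequests[index].url, upperBoundSeconds: gap))
            }
        }
        return results
    }

Even this only bounds the possible watch time; it says nothing about pauses, background tabs or abandoned pages.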
