We use the YouTube Data API to upload videos, but when we poll the status of an uploaded video once per minute, it takes 15-30 polls before the status changes to succeeded.
Our videos are less than 5 MB.
What can we do to make this faster?
We found that tagSuggestionsAvailability and editorSuggestionsAvailability are always "inProgress", while processingDetails.processingStatus may be "succeeded" or "processing".
We also found that if processingDetails.processingProgress.partsProcessed is 1000 and processingDetails.processingProgress.timeLeftMs is 0 while processingDetails.processingStatus is still "processing", the video has probably succeeded.
So while processingDetails.processingStatus is "processing", we treat the video as succeeded if either every availability field except tagSuggestionsAvailability and editorSuggestionsAvailability is "available", or partsProcessed is 1000 and timeLeftMs is 0.
This works for us, though it occasionally fails, and we only have to wait about one minute for the status change. Does anyone have a better idea?
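A minimal sketch of that heuristic in Objective-C, assuming processingDetails is the parsed "processingDetails" dictionary from a videos.list response (the function name and the exact set of availability fields checked are illustrative):

// Heuristic sketch: treat a "processing" video as finished when either the
// non-suggestion availability fields are "available" or the progress
// counters indicate completion, as observed above.
static BOOL VideoLooksFinished(NSDictionary *processingDetails) {
    NSString *status = processingDetails[@"processingStatus"];
    if ([status isEqualToString:@"succeeded"]) return YES;
    if (![status isEqualToString:@"processing"]) return NO;

    // All availability fields except tagSuggestionsAvailability and
    // editorSuggestionsAvailability, which appear to stay "inProgress".
    BOOL available =
        [processingDetails[@"fileDetailsAvailability"] isEqualToString:@"available"] &&
        [processingDetails[@"processingIssuesAvailability"] isEqualToString:@"available"] &&
        [processingDetails[@"thumbnailsAvailability"] isEqualToString:@"available"];

    // partsProcessed == 1000 and timeLeftMs == 0, per the observation above.
    NSDictionary *progress = processingDetails[@"processingProgress"];
    BOOL progressDone = [progress[@"partsProcessed"] longLongValue] == 1000 &&
                        [progress[@"timeLeftMs"] longLongValue] == 0;

    return available || progressDone;
}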
I am trying to display sleep data from HealthKit. I am using AppCore to display other HealthKit quantities; for HKQuantity types like steps I use the following:
[[APCScoring alloc] initWithHealthKitQuantityType:
        [HKQuantityType quantityTypeForIdentifier:HKQuantityTypeIdentifierStepCount]
                                             unit:[HKUnit countUnit]
                                     numberOfDays:-kNumberOfDaysToDisplay];
My issue is that sleep data is not an HKQuantityType, so I can't use HKStatisticsCollectionQuery.
I am looking to display HKCategoryValueSleepAnalysisAsleep.
As you probably figured out, HealthKit sleep analysis is not quantitative.
As described in the Apple documentation, you have only three states: in bed, asleep, or awake.
I had the same question, and to work around it I count minutes asleep versus minutes in bed and minutes awake (depending on what is relevant to you), based on each sample's startDate and endDate. Then I display the result in a histogram or similar chart.
If you're looking for a way to fetch or save sleep analysis data with HealthKit, I wrote a post about it a year ago that may help you.
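For the counting itself, here is a minimal sketch of the kind of query I mean, assuming an authorized HKHealthStore named healthStore (the variable names are illustrative):

// Sketch: sum time asleep vs. time in bed from sleep analysis samples.
HKCategoryType *sleepType =
    [HKObjectType categoryTypeForIdentifier:HKCategoryTypeIdentifierSleepAnalysis];
HKSampleQuery *query = [[HKSampleQuery alloc]
    initWithSampleType:sleepType
             predicate:nil   // add a date predicate for the range you chart
                 limit:HKObjectQueryNoLimit
       sortDescriptors:nil
        resultsHandler:^(HKSampleQuery *q, NSArray *results, NSError *error) {
            NSTimeInterval asleep = 0, inBed = 0;
            for (HKCategorySample *sample in results) {
                NSTimeInterval duration =
                    [sample.endDate timeIntervalSinceDate:sample.startDate];
                if (sample.value == HKCategoryValueSleepAnalysisAsleep) {
                    asleep += duration;
                } else if (sample.value == HKCategoryValueSleepAnalysisInBed) {
                    inBed += duration;
                }
            }
            NSLog(@"Asleep: %.0f min, in bed: %.0f min", asleep / 60, inBed / 60);
        }];
[healthStore executeQuery:query];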
I have a YouTube video (which was NOT uploaded by me) that is 50 minutes long.
The video covers several topics, and each topic starts at a different time, for example:
content_x starts at 0 minutes : 0 seconds
content_y starts at 10 minutes : 0 seconds
...etc.
Now I would like to turn these sections into hyperlinks, so that to watch any section I can just click the link for its time (minutes and seconds).
I would prefer to do that in the description of the YouTube video or in the "About" section. Can you guide me on how to do that, or suggest another simple way to reach the different sections of a YouTube video in a dynamic and descriptive way?
You can append the time you want to the end of the YouTube link, e.g.
http://www.youtube.com/watch?v=XXX#t=31m08s
where 31m08s represents 31 minutes 8 seconds.
You can make links for the rest of the sections in the same way.
Check this site: www.youtubetime.com - it will generate a YouTube link with a specific starting time. Alternatively, you can just write your timestamps in a video's description, e.g.
very long description 0:00 part 1 1:00 part 2
and so on, or you can write a comment with these time links and use it as an "index".
Hope that answers your question.
Right-click the video and select "Copy video URL at current time", then paste it anywhere.
I'm trying to develop an iPhone app that uses the camera but keeps only the last few minutes or seconds of recording.
For example, you record a movie for 5 minutes, click "save", and only the last 30 seconds are kept. I don't want to actually record five minutes and then chop off the last 30 seconds (that won't work for me). This idea is called "loop recording".
It amounts to an endless video recording of which only the most recent part is remembered.
The Precorder app does what I want to do (I want to use this feature in another context).
I think this could be simulated with a circular buffer.
I started a project with AVFoundation. It would be great if I could somehow redirect the video data to a circular buffer (which I will implement), but I have only found information on how to write it to a file.
I know I could chop the video into intervals and save them, but saving a file and restarting the camera to record the next part takes time, and it is possible to lose important moments in between.
Any clues on how to redirect the data from the camera would be appreciated.
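As a starting point, here is a minimal sketch of the circular-buffer part, under the assumption that the captured data arrives as discrete NSData chunks; the class name and capacity handling are illustrative, and getting encoded chunks out of the capture pipeline is what the answer below addresses:

#import <Foundation/Foundation.h>

// Sketch: a ring buffer of encoded chunks. Appending past capacity drops
// the oldest chunk, so only the most recent data is retained.
@interface ChunkRingBuffer : NSObject
- (instancetype)initWithCapacity:(NSUInteger)capacity;
- (void)appendChunk:(NSData *)chunk;
- (NSArray *)chunksInOrder; // oldest first, ready to be stitched together
@end

@implementation ChunkRingBuffer {
    NSMutableArray *_chunks;
    NSUInteger _capacity;
}

- (instancetype)initWithCapacity:(NSUInteger)capacity {
    if ((self = [super init])) {
        _capacity = capacity;
        _chunks = [NSMutableArray arrayWithCapacity:capacity];
    }
    return self;
}

- (void)appendChunk:(NSData *)chunk {
    if (_chunks.count == _capacity) {
        [_chunks removeObjectAtIndex:0]; // discard the oldest chunk
    }
    [_chunks addObject:chunk];
}

- (NSArray *)chunksInOrder {
    return [_chunks copy];
}
@end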
Important! As of iOS 8 you can use VTCompressionSession and have direct access to the NAL units instead of having to dig through the container.
Well, luckily you can do this, and I'll tell you how, but you're going to have to get your hands dirty with either the MP4 or MOV container. A helpful resource for this (though more MOV-specific) is Apple's QuickTime File Format introduction:
http://developer.apple.com/library/mac/#documentation/QuickTime/QTFF/QTFFPreface/qtffPreface.html#//apple_ref/doc/uid/TP40000939-CH202-TPXREF101
First things first: you're not going to be able to start your saved movie from an arbitrary point 30 seconds before the end of the recording; you'll have to use some I-frame approximately 30 seconds back. Depending on your keyframe interval, it may be several seconds before or after that 30-second mark. You could use all I-frames and start from an arbitrary point, but then you'll probably want to re-encode the video afterward because it will be quite large.
So, knowing that, let's move on.
The first step is, when you set up your AVAssetWriter, to set its AVAssetWriterInput's expectsMediaDataInRealTime property to YES.
In the captureOutput callback you'll be able to do an fread from the file you are writing to. The first fread will get you a little bit of MP4/MOV (whatever format you're using) header (i.e. the 'ftyp' atom, the 'wide' atom, and the beginning of the 'mdat' atom). You want what's inside the 'mdat' section, so the offset you'll start saving data from will be 36 or so.
Each read will get you zero or more AVC NAL units. You can find a listing of NAL unit types in ISO/IEC 14496-10 Table 7-1. They will be in a slightly different format than specified in Annex B, but that's fine. Additionally, there will only be IDR slices and non-IDR slices in the MP4/MOV file; IDR is the I-frame you're looking to hang onto.
The NAL unit format in the MP4/MOV container is as follows:
4 bytes - Size
[Size] bytes - NALU Data
data[0] & 0x1F - NALU Type
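A small sketch of walking those length-prefixed units in a buffer read out of the 'mdat' section (buf and len are assumptions, and a real reader must handle a unit split across two reads):

#include <stdint.h>
#include <stddef.h>

// Sketch: walk length-prefixed NAL units in a buffer read from 'mdat'.
// Returns the offset of the last IDR (I-frame) unit found, or -1.
static long FindLastIDROffset(const uint8_t *buf, size_t len) {
    long lastIDR = -1;
    size_t offset = 0;
    while (offset + 4 <= len) {
        // 4-byte big-endian size prefix
        uint32_t naluSize = ((uint32_t)buf[offset] << 24) |
                            ((uint32_t)buf[offset + 1] << 16) |
                            ((uint32_t)buf[offset + 2] << 8) |
                             (uint32_t)buf[offset + 3];
        if (offset + 4 + naluSize > len) break; // unit split across reads
        uint8_t naluType = buf[offset + 4] & 0x1F; // 5 == IDR slice
        if (naluType == 5) lastIDR = (long)offset;
        offset += 4 + naluSize;
    }
    return lastIDR;
}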
So now you have the data you're looking for. When you go to save this file, you'll have to update the MP4/MOV container with the correct length and sample count: you'll have to update the 'stsz' atom with the correct size for each sample, and update the media headers and track headers with the correct duration of the movie, and so on. What I would recommend is creating a sample container on first run that you can more or less just overwrite/augment with the appropriate data for that particular movie. You'll want to do this because the encoders on the various iDevices don't all have the same settings, and the 'avcC' atom contains encoder information.
You don't really need to know much about the AVC stream in this case, so you'll probably want to concentrate your experimenting on correctly updating whichever container format you choose. Good luck.
Description of the problem: when I record an audio file about 1 hour long, its size comes out around 600 MB, which is too big. I want it to be compressed to a smaller size, but I don't know how. The reason for wanting compression is that saving the file takes a lot of time when I convert it to NSData. Please help or guide me on how to get out of this problem.
Any suggestion would be very much appreciated!
Thanks in advance!
You will have to choose the correct encoding format to reduce the recorded audio file's size.
Please see this link: Record audio on iPhone with smallest file size
If it is still bigger than expected, you can break the file into multiple small chunks:
http://sspbond007.blogspot.in/2012/01/objective-c-filesplitter-and-joiner.html
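For the first suggestion, a minimal sketch of recording compressed AAC with AVAudioRecorder; the bit rate and outputURL are assumptions, and roughly 64 kbps works out to about 30 MB per hour instead of ~600 MB for uncompressed 16-bit 44.1 kHz stereo PCM:

#import <AVFoundation/AVFoundation.h>

// Sketch: record AAC instead of uncompressed PCM to shrink the file.
NSDictionary *settings = @{
    AVFormatIDKey:         @(kAudioFormatMPEG4AAC), // compressed AAC
    AVSampleRateKey:       @44100.0,
    AVNumberOfChannelsKey: @1,                      // mono is enough for speech
    AVEncoderBitRateKey:   @64000                   // ~64 kbps
};
NSError *error = nil;
AVAudioRecorder *recorder =
    [[AVAudioRecorder alloc] initWithURL:outputURL  // outputURL: your .m4a file URL
                                settings:settings
                                   error:&error];
if (recorder && [recorder prepareToRecord]) {
    [recorder record];
}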
I use an AFHTTPRequestOperation to upload between 1 and 6 images to a web server. The weird thing is that when it reports progress in my setUploadProgressBlock, it reports totalBytesWritten as:
32,768
32,768
32,768
32,768
3,238
2,420
2,420 ... and it keeps repeating 2,420 until the final chunk, which is the remainder.
I'm using a UIProgressView to report upload progress, and it jumps to 30% or so immediately because of the unequal chunks at the beginning (the 32,768-byte chunks). I have worked around this by simply ignoring the first four large chunks, but I'm wondering if anyone has an explanation for why it does this, or a more elegant way to handle it. Also, once it reports that all bytes have been written, it sits there "doing nothing" for several seconds, which seems like an unreasonably long delay. I've handled that with a UIActivityIndicatorView (spinner), but it's annoying that the delay is so long. I should mention that this is tested on 3G, as that will be the target environment.
Can you double-check that you're not reading the value of bytesWritten, which reports how many bytes were uploaded in the last batch, as opposed to totalBytesWritten? Alternatively, several uploads may be being performed simultaneously, which could be confusing if you're logging them all from the same callback.
The "doing nothing" for several seconds could be waiting for the response from the server. Do you have any more details about that?