YTPlayerView doesn't always go to full screen - iOS

I am trying to show a YouTube video, but sometimes the YTPlayerView is displayed in full screen and sometimes not. I want it to always be in full screen. How can I achieve that?
ytView = [[YTPlayerView alloc] initWithFrame:self.view.bounds];
ytView.backgroundColor = self.view.backgroundColor;
ytView.delegate = self;
NSDictionary *playvarsDic = @{ @"controls" : @1, @"playsinline" : @0, @"autohide" : @1, @"showinfo" : @1, @"autoplay" : @1, @"modestbranding" : @1 };
[ytView loadWithVideoId:firstImage.Source playerVars:playvarsDic];

Manually set playerView.webView.allowsInlineMediaPlayback = NO in playerViewDidBecomeReady to force the player out of inline playback in the web view and into full screen.
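A minimal Swift sketch of that suggestion, assuming the youtube-ios-player-helper build you use exposes its backing UIWebView through the webView property:

func playerViewDidBecomeReady(_ playerView: YTPlayerView) {
    // With inline playback disabled, the video should open in the full-screen native player.
    playerView.webView.allowsInlineMediaPlayback = false
    playerView.playVideo()
}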

To play a YouTube video in full screen you need to add one more player parameter:
fullscreen parameter: fs = 1 or 0
Update your dictionary like below:
NSDictionary *playvarsDic = @{ @"fs" : @1,
                               @"controls" : @1,
                               @"playsinline" : @0,
                               @"autohide" : @1,
                               @"showinfo" : @1,
                               @"autoplay" : @1,
                               @"modestbranding" : @1 };
fs (fullscreen) parameter:
Setting this parameter to 0 prevents the fullscreen button from displaying in the player. The default value is 1, which causes the fullscreen button to display.
Check all the player parameters at this link: https://developers.google.com/youtube/player_parameters?playerVersion=HTML5
Hope this helps you show your YouTube video in full-screen mode every time!

Related

Problem setting video frame rate using AVAssetWriter/AVAssetReader

Situation:
I am trying to export video with some parameters like video bit rate, audio bit rate, frame rate, changing video resolution, etc. Note that I am letting the user set the video frame rate in fractions; e.g. the user can set the video frame rate to, say, 23.98.
I use AVAssetWriter and AVAssetReader for this operation. I use AVAssetWriterInputPixelBufferAdaptor for writing the sample buffers.
Everything else works just fine, except the video frame rate.
What I have tried:
Setting the AVAssetWriter.movieTimeScale, as suggested here. This does change the video frame rate but also makes the video sluggish. (gist here)
Setting AVVideoExpectedSourceFrameRateKey, which does not help. (gist here)
Setting AVAssetWriterInput.mediaTimeScale. Again, it changes the video frame rate but makes the video sluggish, just as AVAssetWriter.movieTimeScale does. The video shows different frames at some points, and sometimes it sticks and resumes again. (gist here)
Using AVAssetReaderVideoCompositionOutput and setting AVMutableVideoComposition.frameDuration, just like SDAVAssetExportSession does. Ironically, with the SDAVAssetExportSession code the video is exported at exactly the frame rate I want, but it just does not work in my code. gist here
I am not sure why it won't work with my code. The issue with this approach is that AVAssetReaderVideoCompositionOutput.copyNextSampleBuffer() always returns nil.
Manually changing the timestamp of the frame with CMSampleTimingInfo, as suggested here. Something like:
var sampleTimingInfo = CMSampleTimingInfo()
var sampleBufferToWrite: CMSampleBuffer?
CMSampleBufferGetSampleTimingInfo(vBuffer, at: 0, timingInfoOut: &sampleTimingInfo)
sampleTimingInfo.duration = CMTimeMake(value: 100, timescale: Int32(videoConfig.videoFrameRate * 100))
sampleTimingInfo.presentationTimeStamp = CMTimeAdd(previousPresentationTimeStamp, sampleTimingInfo.duration)
previousPresentationTimeStamp = sampleTimingInfo.presentationTimeStamp
let status = CMSampleBufferCreateCopyWithNewTiming(allocator: kCFAllocatorDefault,
                                                   sampleBuffer: vBuffer,
                                                   sampleTimingEntryCount: 1,
                                                   sampleTimingArray: &sampleTimingInfo,
                                                   sampleBufferOut: &sampleBufferToWrite)
With this approach I do get the frame rate set just right, but it increases the video duration (as mentioned in the comment on that question's answer). I think at some point I will have to discard some frames (if the target frame rate is lower; I need to lower the frame rate in most cases).
If I want 30 fps and my current frame rate is 60 fps, it's simple to discard every second frame and set the sample buffer times accordingly.
But if I go with this approach (i.e. setting 23.98 fps), how do I decide which frame to discard, and, if the target frame rate is higher, which frame to duplicate? Reminder: the frame rate could be fractional.
Here is an idea for selecting frames. Suppose the fps of the source video is F and the target fps is TF, with rate = TF/F.
Initialize a variable n to -rate and add rate for each frame; whenever the integer part of n changes, select the frame.
e.g. rate = 0.3
n:            -0.3   0.0   0.3   0.6   0.9   1.2   1.5   1.8   2.1
frame index:     0     1     2     3     4     5     6     7
select:          0                       4                 7
float rate = 0.39999f; // rate = TF/F
float n = -rate;       // to make sure the first frame will be selected
for (int i = 0; i < 100; ++i, n += rate) { // i is the frame index; take a video with 100 frames as an example
    int m = floor(n);
    int tmp = n + rate; // integer truncation acts as the integer-part test here
    // if rate > 1.0, frame i is repeated
    // if rate < 1.0, some of the frames are dropped
    for (int j = 0; m + j < tmp; ++j) {
        // use this frame
        printf("%d ", i);
    }
}
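To connect this selection rule back to the retiming code in the question, here is a hedged Swift sketch. It is an illustration only: write(_:at:) is a hypothetical stand-in for retiming the buffer with CMSampleBufferCreateCopyWithNewTiming and appending it, and the fps values are examples.

import AVFoundation

let sourceFPS = 60.0
let targetFPS = 23.98                  // fractional target, as in the question
let rate = targetFPS / sourceFPS       // rate = TF/F
let frameDuration = CMTime(value: 100, timescale: Int32((targetFPS * 100).rounded()))
var n = -rate                          // so the first frame is always selected
var outIndex: Int32 = 0

func write(_ buffer: CMSampleBuffer, at time: CMTime) {
    // Hypothetical helper: copy `buffer` with new timing via
    // CMSampleBufferCreateCopyWithNewTiming, then append it to the writer input.
}

func process(_ buffer: CMSampleBuffer) {
    let m = Int(floor(n))
    let tmp = Int(n + rate)            // integer truncation, mirroring the C example
    // (tmp - m) is 0 to drop this frame, 1 to keep it, >1 to duplicate it
    for _ in 0..<max(0, tmp - m) {
        let pts = CMTimeMultiply(frameDuration, multiplier: outIndex)
        write(buffer, at: pts)
        outIndex += 1
    }
    n += rate
}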
NSMutableDictionary *writerInputParams = [[NSMutableDictionary alloc] init];
[writerInputParams setObject:AVVideoCodecTypeH264 forKey:AVVideoCodecKey];
[writerInputParams setObject:[NSNumber numberWithInt:width] forKey:AVVideoWidthKey];
[writerInputParams setObject:[NSNumber numberWithInt:height] forKey:AVVideoHeightKey];
[writerInputParams setObject:AVVideoScalingModeResizeAspectFill forKey:AVVideoScalingModeKey];

NSMutableDictionary *compressionProperties = [[NSMutableDictionary alloc] init];
[compressionProperties setObject:[NSNumber numberWithInt:20] forKey:AVVideoExpectedSourceFrameRateKey];
[compressionProperties setObject:[NSNumber numberWithInt:20] forKey:AVVideoAverageNonDroppableFrameRateKey];
[compressionProperties setObject:[NSNumber numberWithDouble:0.0] forKey:AVVideoMaxKeyFrameIntervalDurationKey];
[compressionProperties setObject:[NSNumber numberWithInt:1] forKey:AVVideoMaxKeyFrameIntervalKey];
[compressionProperties setObject:[NSNumber numberWithBool:YES] forKey:AVVideoAllowFrameReorderingKey];
[compressionProperties setObject:AVVideoProfileLevelH264BaselineAutoLevel forKey:AVVideoProfileLevelKey];
[writerInputParams setObject:compressionProperties forKey:AVVideoCompressionPropertiesKey];

self.assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:writerInputParams];
self.assetWriterInput.expectsMediaDataInRealTime = YES;
It has been verified that SCNView refreshes at 60 frames per second, but I want AVAssetWriter to save only 20 frames per second. What should I do?
Neither AVVideoExpectedSourceFrameRateKey nor AVVideoAverageNonDroppableFrameRateKey above affects the fps; configuring the fps through these keys does not work!
// Set this to make sure that a functional movie is produced, even if the recording is cut off mid-stream. Only the last second should be lost in that case.
self.videoWriter.movieFragmentInterval = CMTimeMakeWithSeconds(1.0, 1000);
self.videoWriter.shouldOptimizeForNetworkUse = YES;
self.videoWriter.movieTimeScale = 20;
The above configuration does not affect the fps either.
self.assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:writerInputParams];
self.assetWriterInput.expectsMediaDataInRealTime = YES;
// This setting changes the frame presentation times to fit the fps, but it also changes the video duration.
// self.assetWriterInput.mediaTimeScale = 20;
self.assetWriterInput.mediaTimeScale does affect the fps, but it stretches the video duration (by 3x in my case), because the times of the frames appended through
BOOL isSUc = [self.writerAdaptor appendPixelBuffer:cvBuffer withPresentationTime:presentationTime];
are re-modified. So configuring self.assetWriterInput.mediaTimeScale is seriously inconsistent with expectations; the video duration should not be stretched.
So if you want to control the fps of the video that AVAssetWriter finally saves, you must take control of the presentation times yourself and make sure you append frames with timestamps spaced at exactly 20 per second:
CMTime presentationTime = CMTimeMake(_writeCount * (1.0/20.0) * 1000, 1000);
BOOL isSUc = [self.writerAdaptor appendPixelBuffer:cvBuffer withPresentationTime:presentationTime];
_writeCount += 1;

How do I control AVAssetWriter to write at the correct FPS

Let me see if I understood it correctly.
On the most advanced current hardware, iOS allows me to record at the following fps: 30, 60, 120 and 240.
But these fps behave differently. If I shoot at 30 or 60 fps, I expect the video files created from shooting at these fps to play at 30 and 60 fps respectively.
But if I shoot at 120 or 240 fps, I expect the video files created from shooting at these fps to play at 30 fps, or I will not see the slow motion.
A few questions:
am I right?
is there a way to shoot at 120 or 240 fps and play at 120 and 240 fps respectively? I mean, play at the fps the videos were shot at, without slo-mo?
How do I control that framerate when I write the file?
I am creating the AVAssetWriter input like this...
NSDictionary *videoCompressionSettings = @{AVVideoCodecKey : AVVideoCodecH264,
                                           AVVideoWidthKey : @(videoWidth),
                                           AVVideoHeightKey : @(videoHeight),
                                           AVVideoCompressionPropertiesKey : @{ AVVideoAverageBitRateKey : @(bitsPerSecond),
                                                                                AVVideoMaxKeyFrameIntervalKey : @(1)}
                                           };
_assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoCompressionSettings];
and there is no apparent way to control that.
NOTE: I have tried different numbers where that 1 is. I have tried 1.0/fps, I have tried fps and I have removed the key. No difference.
This is how I set up AVAssetWriter:
AVAssetWriter *newAssetWriter = [[AVAssetWriter alloc] initWithURL:_movieURL fileType:AVFileTypeQuickTimeMovie
                                                             error:&error];
_assetWriter = newAssetWriter;
_assetWriter.shouldOptimizeForNetworkUse = NO;

CGFloat videoWidth = size.width;
CGFloat videoHeight = size.height;
NSUInteger numPixels = videoWidth * videoHeight;
NSUInteger bitsPerSecond;

// Assume that lower-than-SD resolutions are intended for streaming, and use a lower bitrate
// if ( numPixels < (640 * 480) )
//     bitsPerPixel = 4.05; // This bitrate matches the quality produced by AVCaptureSessionPresetMedium or Low.
// else
CGFloat bitsPerPixel = 11.4; // This bitrate matches the quality produced by AVCaptureSessionPresetHigh. (An integer type here would truncate the 11.4.)
bitsPerSecond = numPixels * bitsPerPixel;

NSDictionary *videoCompressionSettings = @{AVVideoCodecKey : AVVideoCodecH264,
                                           AVVideoWidthKey : @(videoWidth),
                                           AVVideoHeightKey : @(videoHeight),
                                           AVVideoCompressionPropertiesKey : @{ AVVideoAverageBitRateKey : @(bitsPerSecond)}
                                           };
if (![_assetWriter canApplyOutputSettings:videoCompressionSettings forMediaType:AVMediaTypeVideo]) {
    NSLog(@"Couldn't apply asset writer video output settings.");
    return;
}

_assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                            outputSettings:videoCompressionSettings
                                                          sourceFormatHint:formatDescription];
_assetWriterVideoInput.expectsMediaDataInRealTime = YES;

NSDictionary *adaptorDict = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
    (id)kCVPixelBufferWidthKey : @(videoWidth),
    (id)kCVPixelBufferHeightKey : @(videoHeight)
};
_pixelBufferAdaptor = [[AVAssetWriterInputPixelBufferAdaptor alloc]
                       initWithAssetWriterInput:_assetWriterVideoInput
                       sourcePixelBufferAttributes:adaptorDict];

// Add asset writer input to asset writer
if (![_assetWriter canAddInput:_assetWriterVideoInput]) {
    return;
}
[_assetWriter addInput:_assetWriterVideoInput];
The captureOutput method is very simple. I get the image from the filter and write it to file using:
if (videoJustStartWriting)
    [_assetWriter startSessionAtSourceTime:presentationTime];

CVPixelBufferRef renderedOutputPixelBuffer = NULL;
OSStatus err = CVPixelBufferPoolCreatePixelBuffer(nil,
                                                  _pixelBufferAdaptor.pixelBufferPool,
                                                  &renderedOutputPixelBuffer);
if (err) return; // NSLog(@"Cannot obtain a pixel buffer from the buffer pool");

// _ciContext is a Metal context
[_ciContext render:finalImage
   toCVPixelBuffer:renderedOutputPixelBuffer
            bounds:[finalImage extent]
        colorSpace:_sDeviceRgbColorSpace];

[self writeVideoPixelBuffer:renderedOutputPixelBuffer
            withInitialTime:presentationTime];
- (void)writeVideoPixelBuffer:(CVPixelBufferRef)pixelBuffer withInitialTime:(CMTime)presentationTime
{
    if ( _assetWriter.status == AVAssetWriterStatusUnknown ) {
        // If the asset writer status is unknown, writing hasn't started yet, so start writing with the buffer's presentation timestamp as the start time.
        if ([_assetWriter startWriting]) {
            [_assetWriter startSessionAtSourceTime:presentationTime];
        }
    }

    if ( _assetWriter.status == AVAssetWriterStatusWriting ) {
        // If the asset writer status is writing, append the sample buffer to its corresponding asset writer input.
        if (_assetWriterVideoInput.readyForMoreMediaData) {
            if (![_pixelBufferAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime]) {
                NSLog(@"error: %@", [_assetWriter.error localizedFailureReason]);
            }
        }
    }

    if ( _assetWriter.status == AVAssetWriterStatusFailed ) {
        NSLog(@"failed");
    }
}
I set the whole thing up to shoot at 240 fps. These are the presentation times of frames being appended:
time ======= 113594.311510508
time ======= 113594.324011508
time ======= 113594.328178716
time ======= 113594.340679424
time ======= 113594.344846383
If you do some calculation between them, you will see that the frame rate is about 240 fps. So the frames are being stored with the correct times.
But when I watch the video, the movement is not in slow motion, and QuickTime says the video is 30 fps.
Note: this app grabs frames from the camera, the frames go into CIFilters, and the result of those filters is converted back to a sample buffer that is stored to file and displayed on the screen.
I'm reaching here, but I think this is where you're going wrong. Think of your video capture as a pipeline.
(1) Capture buffer -> (2) Do Something With buffer -> (3) Write buffer as frames in video.
Sounds like you've successfully completed (1) and (2): you're getting the buffers fast enough and you're processing them so you can vend them as frames.
The problem is almost certainly in (3) writing the video frames.
https://developer.apple.com/reference/avfoundation/avmutablevideocomposition
Check out the frameDuration setting on your AVMutableVideoComposition; you'll need something like CMTimeMake(1, 60) // 60 FPS or CMTimeMake(1, 240) // 240 FPS to get what you're after (telling the video to WRITE this many frames and encode at this rate).
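For reference, a minimal Swift sketch of the frameDuration approach (the 240 is just an example rate):

import AVFoundation

let videoComposition = AVMutableVideoComposition()
videoComposition.frameDuration = CMTime(value: 1, timescale: 240) // ask the composition to render 240 frames per second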
Using AVAssetWriter it's exactly the same principle, but you set the frame rate as a property in the AVAssetWriterInput outputSettings by adding AVVideoExpectedSourceFrameRateKey (note that this key belongs inside the AVVideoCompressionPropertiesKey dictionary):
NSDictionary *videoCompressionSettings = @{AVVideoCodecKey : AVVideoCodecH264,
                                           AVVideoWidthKey : @(videoWidth),
                                           AVVideoHeightKey : @(videoHeight),
                                           AVVideoCompressionPropertiesKey : @{ AVVideoAverageBitRateKey : @(bitsPerSecond),
                                                                                AVVideoExpectedSourceFrameRateKey : @(60),
                                                                                AVVideoMaxKeyFrameIntervalKey : @(1)}
                                           };
To expand a little more: you can't strictly control or sync your camera capture exactly to the output / playback rate; the timing just doesn't work that way and isn't that exact, and of course the processing pipeline adds overhead. When you capture frames they are time stamped, as you've seen, but in the writing / compression phase it uses only the frames it needs to produce the output specified for the composition.
It goes both ways: you could capture only 30 FPS and write out at 240 FPS; the video would display fine, you'd just have a lot of frames "missing" and being filled in by the algorithm. You can even vend only 1 frame per second and play back at 30 FPS; the two are separate from each other (how fast I capture vs. how many frames I present per second).
As to how to play it back at a different speed, you just need to tweak the playback speed - slow it down as needed.
If you've correctly set the time base (frameDuration), it will always play back "normal" - you're telling it "playback is X frames per second". Of course your eye may notice a difference (almost certainly between low FPS and high FPS), and the screen may not refresh that high (above 60 FPS), but regardless the video will be at a "normal" 1x speed for its timebase. By slowing the video down - if my timebase is 120 and I slow it to .5x - I now effectively see 60 FPS, and one second of playback takes two seconds.
You control the playback speed by setting the rate property on AVPlayer https://developer.apple.com/reference/avfoundation/avplayer
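For example, a short Swift sketch (movieURL is a placeholder path):

import AVFoundation

let movieURL = URL(fileURLWithPath: "/path/to/240fps-capture.mov") // placeholder
let player = AVPlayer(url: movieURL)
player.rate = 0.25 // setting a nonzero rate starts playback; 240 FPS content at 0.25x presents an effective 60 FPS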
The iOS screen refresh is locked at 60 fps, so the only way to "see" the extra frames is, as you say, to slow down the playback rate, a.k.a. slow motion.
So
yes, you are right
the screen refresh rate (and perhaps the limitations of the human visual system, assuming you're human?) means that you cannot perceive 120 and 240 fps frame rates. You can play them at normal speed by downsampling to the screen refresh rate. Surely this is what AVPlayer already does, although I'm not sure if that's the answer you're looking for.
you control the framerate of the file when you write it, with the CMSampleBuffer presentation timestamps. If your frames are coming from the camera, you're probably passing the timestamps straight through, in which case check that you really are getting the framerate you asked for (a log statement in your capture callback should be enough to verify this). If you're procedurally creating frames, then you choose the presentation timestamps so that they're spaced 1.0/desiredFrameRate seconds apart - see the sketch after this answer!
Is 3. not working for you?
p.s. you can discard & ignore AVVideoMaxKeyFrameIntervalKey - it's a quality setting and has nothing to do with playback framerate.
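A minimal sketch of point 3 in current Swift syntax, for procedurally created frames. The adaptor and pixel buffer are assumed to be set up as in the question:

import AVFoundation

// Hedged sketch: append frames with timestamps spaced 1.0/desiredFrameRate seconds apart.
func append(_ pixelBuffer: CVPixelBuffer,
            frames frameCount: Int,
            to adaptor: AVAssetWriterInputPixelBufferAdaptor,
            desiredFrameRate: Double) {
    let timescale: CMTimeScale = 240_000 // large timescale so the spacing stays exact for common rates
    let frameDuration = CMTime(value: CMTimeValue(Double(timescale) / desiredFrameRate),
                               timescale: timescale)
    for i in 0..<frameCount {
        let pts = CMTimeMultiply(frameDuration, multiplier: Int32(i))
        if !adaptor.append(pixelBuffer, withPresentationTime: pts) {
            print("append failed at frame \(i)")
        }
    }
}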

iOS occasionally writes empty video file

I have a Movie class that has an array of UIImage that I write to an H.264 file. It works, but occasionally it writes a video file that does not contain any content, even though the file size is still non-zero.
Here is the code where I do the writing. I am pretty new to iOS development, so this was copied from the internet and I may not fully understand what it is doing. Hopefully someone can suggest a better way of doing this.
frame is a UIImage instance in the loop:
func writeAnimationToMovie() {
    var error: NSError?
    writer.startWriting()
    writer.startSessionAtSourceTime(kCMTimeZero)

    var buffer: CVPixelBufferRef
    var frameCount = 0
    for frame in self.photos {
        buffer = createPixelBufferFromCGImage(frame.CGImage)
        var appendOk = false
        var j = 0
        while (!appendOk && j < 30) {
            if pixelBufferAdaptor.assetWriterInput.readyForMoreMediaData {
                let frameTime = CMTimeMake(Int64(frameCount), Int32(fps))
                appendOk = pixelBufferAdaptor.appendPixelBuffer(buffer, withPresentationTime: frameTime)
                // appendOk will always be false
                NSThread.sleepForTimeInterval(0.05)
            } else {
                NSThread.sleepForTimeInterval(0.1)
            }
            j++
        }
        if (!appendOk) {
            println("Doh, frame \(frame) at offset \(frameCount) failed to append")
        }
        frameCount++
    }

    input.markAsFinished()
    writer.finishWritingWithCompletionHandler({
        if self.writer.status == AVAssetWriterStatus.Failed {
            println("oh noes, an error: \(self.writer.error.description)")
        } else {
            let content = NSFileManager.defaultManager().contentsAtPath(self.fileURL.path!)
            println("wrote video: \(self.fileURL.path) at size: \(content?.length)")
        }
    })
}
func createPixelBufferFromCGImage(image: CGImageRef) -> CVPixelBufferRef {
    let options = [
        "kCVPixelBufferCGImageCompatibilityKey": true,
        "kCVPixelBufferCGBitmapContextCompatibilityKey": true
    ]
    let frameSize = CGSizeMake(CGFloat(CGImageGetWidth(image)), CGFloat(CGImageGetHeight(image)))

    var pixelBufferPointer = UnsafeMutablePointer<Unmanaged<CVPixelBuffer>?>.alloc(1)
    var status: CVReturn = CVPixelBufferCreate(
        kCFAllocatorDefault,
        UInt(frameSize.width),
        UInt(frameSize.height),
        OSType(kCVPixelFormatType_32ARGB),
        options,
        pixelBufferPointer
    )

    var lockStatus: CVReturn = CVPixelBufferLockBaseAddress(pixelBufferPointer.memory?.takeUnretainedValue(), 0)
    var pxData: UnsafeMutablePointer<(Void)> = CVPixelBufferGetBaseAddress(pixelBufferPointer.memory?.takeUnretainedValue())

    let bitmapinfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.NoneSkipFirst.rawValue)
    let rgbColorSpace: CGColorSpace = CGColorSpaceCreateDeviceRGB()
    var context: CGContextRef = CGBitmapContextCreate(
        pxData,
        UInt(frameSize.width),
        UInt(frameSize.height),
        8,
        4 * CGImageGetWidth(image),
        rgbColorSpace,
        bitmapinfo
    )

    CGContextDrawImage(context, CGRectMake(0, 0, frameSize.width, frameSize.height), image)
    CVPixelBufferUnlockBaseAddress(pixelBufferPointer.memory?.takeUnretainedValue(), 0)

    return pixelBufferPointer.memory!.takeUnretainedValue()
}
EDIT: The files that do not seem to have any content won't play, unlike the other movies that work. If I view a working file with exiftool, this is what I get:
$ exiftool 20242697651186-o.mp4
ExifTool Version Number : 9.76
File Name : 20242697651186-o.mp4
Directory : .
File Size : 74 kB
File Modification Date/Time : 2014:12:05 11:07:29-05:00
File Access Date/Time : 2014:12:08 10:12:29-05:00
File Inode Change Date/Time : 2014:12:05 11:07:29-05:00
File Permissions : rw-r--r--
File Type : MP4
MIME Type : video/mp4
Major Brand : MP4 v2 [ISO 14496-14]
Minor Version : 0.0.1
Compatible Brands : mp41, mp42, isom
Movie Data Size : 74741
Movie Data Offset : 44
Movie Header Version : 0
Create Date : 2014:12:05 16:07:32
Modify Date : 2014:12:05 16:07:32
Time Scale : 600
Duration : 0.50 s
Preferred Rate : 1
Preferred Volume : 100.00%
Preview Time : 0 s
Preview Duration : 0 s
Poster Time : 0 s
Selection Time : 0 s
Selection Duration : 0 s
Current Time : 0 s
Next Track ID : 2
Track Header Version : 0
Track Create Date : 2014:12:05 16:07:32
Track Modify Date : 2014:12:05 16:07:32
Track ID : 1
Track Duration : 0.50 s
Track Layer : 0
Track Volume : 0.00%
Matrix Structure : 1 0 0 0 1 0 0 0 1
Image Width : 640
Image Height : 480
Media Header Version : 0
Media Create Date : 2014:12:05 16:07:32
Media Modify Date : 2014:12:05 16:07:32
Media Time Scale : 600
Media Duration : 0.50 s
Media Language Code : und
Handler Type : Video Track
Handler Description : Core Media Video
Graphics Mode : srcCopy
Op Color : 0 0 0
Compressor ID : avc1
Source Image Width : 640
Source Image Height : 480
X Resolution : 72
Y Resolution : 72
Bit Depth : 24
Video Frame Rate : 8
Avg Bitrate : 1.2 Mbps
Image Size : 640x480
Rotation : 0
And here is a file that doesn't work. It doesn't have all that metadata in it:
$ exiftool 20242891987099-o.mp4
ExifTool Version Number : 9.76
File Name : 20242891987099-o.mp4
Directory : .
File Size : 75 kB
File Modification Date/Time : 2014:12:05 11:07:37-05:00
File Access Date/Time : 2014:12:08 10:12:36-05:00
File Inode Change Date/Time : 2014:12:05 11:07:37-05:00
File Permissions : rw-r--r--
File Type : MP4
MIME Type : video/mp4
Major Brand : MP4 v2 [ISO 14496-14]
Minor Version : 0.0.1
Compatible Brands : mp41, mp42, isom
Movie Data Size : 76856
Movie Data Offset : 44
You have a few issues. I'm not sure exactly what you mean by "does not contain any content", but hopefully one of these will help (and if not, you should implement them anyway):
You're blocking the thread with your calls to sleepForTimeInterval(), which could cause the problem. This answer suggests moving the run loop along instead of sleeping, which is a slightly better solution, but the readyForMoreMediaData documentation has an even better suggestion:
This property is observable using key-value observing (see Key-Value Observing Programming Guide). Observers should not assume that they will be notified of changes on a specific thread.
Instead of running a loop and asking if it's available, just get the object to tell you when it's ready for more using KVO.
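For instance, a hedged sketch in current Swift syntax. Rather than hand-rolled KVO it uses requestMediaDataWhenReady(on:using:), the callback AVAssetWriterInput provides for exactly this purpose; names like input, writer, pixelBufferAdaptor, photos and fps are the question's own, and this would replace the polling loop inside writeAnimationToMovie:

var frameIndex = 0
let queue = DispatchQueue(label: "movie.writer")
input.requestMediaDataWhenReady(on: queue) {
    // The input calls this block whenever it can accept more frames,
    // so there is no need to sleep or poll readyForMoreMediaData.
    while input.isReadyForMoreMediaData {
        guard frameIndex < photos.count else {
            input.markAsFinished()
            writer.finishWriting { /* inspect writer.status and writer.error here */ }
            return
        }
        let buffer = createPixelBufferFromCGImage(photos[frameIndex].cgImage!)
        let frameTime = CMTimeMake(value: Int64(frameIndex), timescale: Int32(fps))
        if !pixelBufferAdaptor.append(buffer, withPresentationTime: frameTime) {
            print("append failed at frame \(frameIndex)")
        }
        frameIndex += 1
    }
}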
In your createPixelBufferFromCGImage method, you're not checking for failure at any point. For example, you should handle the possibility that pixelBufferPointer.memory? is nil.
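For example, a hedged sketch of that check in current Swift syntax: return an optional instead of force-unwrapping, and inspect the CVReturn status:

import CoreGraphics
import CoreVideo

func pixelBuffer(from image: CGImage, options: [String: Any]) -> CVPixelBuffer? {
    var buffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                     image.width,
                                     image.height,
                                     kCVPixelFormatType_32ARGB,
                                     options as CFDictionary,
                                     &buffer)
    guard status == kCVReturnSuccess, let pixelBuffer = buffer else {
        return nil // creation failed; the caller can skip or report this frame instead of writing garbage
    }
    return pixelBuffer
}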
Basically, I'm hypothesizing that one of these things is happening:
j hits 30 and nothing's been written yet because a thread is blocked. In this case you write an empty file with the correct file size.
createPixelBufferFromCGImage is returning unexpected data, which you're then writing to disk

iOS: YouTube video mute through custom button

My application contains a video with custom UIButtons to manage it.
By using the YouTube iframe API I have managed to play the YouTube video in a UIWebView and hide all its default controls (fullscreen, volume, etc.).
Now I want to control the video through custom buttons. How do I do that?
- fullscreen UIButton: to make the video fullscreen
- Mute/Unmute button: to mute/unmute the video
Refer to the screenshot in my other question:
objective-c: play video by removing the default fullscreen, etc functionality
How do I solve this? Code for the video in the UIWebView:
NSString *htmlString = @"<!DOCTYPE html><html> <body><div id=\"player\"></div><script>var tag = document.createElement('script');tag.src = \"https://www.youtube.com/iframe_api\";var firstScriptTag = document.getElementsByTagName('script')[0];firstScriptTag.parentNode.insertBefore(tag, firstScriptTag);var player;function onYouTubeIframeAPIReady() {player = new YT.Player('player', {height: '196',width: '309',videoId: 'GOiIxqcbzyM',playerVars: {playsinline: 1, controls: 0}, events: {'onReady': onPlayerReady,'onStateChange': onPlayerStateChange}});}function onPlayerReady(event) {event.target.playVideo();}var done = false;function onPlayerStateChange(event) {if (event.data == YT.PlayerState.PLAYING && !done) {setTimeout(stopVideo, 6000);done = true;}}function stopVideo() {}</script></body></html>";
_webViewVideo.delegate = self;
static NSString *youTubeVideoHTML = @"<iframe webkit-playsinline width=\"309\" height=\"200\" src=\"https://www.youtube.com/embed/GOiIxqcbzyM?feature=player_detailpage&playsinline=1\" frameborder=\"0\"></iframe>";
[_webViewVideo loadHTMLString:htmlString baseURL:[[NSBundle mainBundle] resourceURL]];
This combination of player variables hides the video's controls:
NSDictionary *playerVars = @{
    @"controls" : @0,
    @"playsinline" : @1,
    @"autohide" : @1,
    @"showinfo" : @0,
    @"modestbranding" : @1,
    @"rel" : @0
};
To make the video full screen on a button click, try saving the video's current elapsed time, changing the playerVars to show controls, triggering the full-screen button action programmatically (the video then plays in full screen), and using the seek method to resume from the saved elapsed time.
For mute/unmute, try reducing the current phone volume programmatically to mute, and restoring it to unmute.
I don't think Apple provides an API to change the volume; only the user can do it using the hardware controls.
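As an alternative not covered above: the YouTube iframe API that the question already embeds exposes mute() and unMute() on the JS player object, so the custom buttons can call into the page instead of touching the system volume. A sketch in current Swift syntax, assuming a webViewVideo outlet and the player variable from the embedded HTML:

@IBAction func muteTapped(_ sender: UIButton) {
    // Calls the iframe API's own mute() on the JS player created in the HTML above.
    webViewVideo.stringByEvaluatingJavaScript(from: "player.mute();")
}

@IBAction func unmuteTapped(_ sender: UIButton) {
    webViewVideo.stringByEvaluatingJavaScript(from: "player.unMute();")
}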

Youtube flash player getDuration not accurate

For this video:
http://www.youtube.com/watch?v=3Hn3ISdjdK0
Youtube displays that the duration is 14 seconds and also a call to GData API gives 14 seconds duration.
However, using the YouTube API getDuration(), I sometimes get 13.28 seconds:
var videoDuration = flashPlayer.getDuration();
Why the discrepancy ?
This is how I construct the flashPlayer:
elements.container.flash({
    swf : 'http://www.youtube.com/apiplayer?enablejsapi=1&version=3&start=' + settings.start,
    id : 'video_' + settings.safeID,
    height : settings.height,
    width : settings.width,
    allowScriptAccess : 'always',
    wmode : 'transparent',
    flashvars : {
        "video_id" : settings.videoID,
        "playerapiid" : settings.safeID
    }
});
It seems that YouTube simply rounds the duration upwards: it's more correct to say that a 13.28 s video is 14 seconds long rather than 13, since it is in fact longer than 13 seconds.
