How do I get an initial screenshot of a video? [duplicate] - ios

This question already has answers here:
Getting a thumbnail from a video url or data in iOS
(12 answers)
Closed 9 years ago.
I have a movie file, and what I'm trying to do is show the first frame of this movie in a UIImage on my screen.
The user selects the UIImage/button, and it plays the movie. The issue I'm having is how to get this initial screenshot.
The best example I can use to demonstrate: on an iPhone/iPad, make a video, go to the library, and you can see a thumbnail showing the initial frame of the video. How can I achieve this?
Thanks.

To get a frame grab:
AVAssetImageGenerator* generator = [AVAssetImageGenerator assetImageGeneratorWithAsset:destinationAsset];
//Get the 1st frame 3 seconds in (value / timescale)
int frameTimeStart = 3;
int frameLocation = 1;
//Snatch a frame
NSError *error = nil;
CGImageRef frameRef = [generator copyCGImageAtTime:CMTimeMake(frameTimeStart, frameLocation)
                                        actualTime:NULL
                                             error:&error];
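For context, a minimal end-to-end sketch of the same approach, assuming videoURL is a local file URL for the movie (the variable names and the 3-second offset are illustrative):

AVURLAsset *asset = [AVURLAsset URLAssetWithURL:videoURL options:nil];
AVAssetImageGenerator *generator = [AVAssetImageGenerator assetImageGeneratorWithAsset:asset];
generator.appliesPreferredTrackTransform = YES; // respect the video's orientation

NSError *error = nil;
CGImageRef frameRef = [generator copyCGImageAtTime:CMTimeMake(3, 1) // 3 seconds in
                                        actualTime:NULL
                                             error:&error];
if (frameRef) {
    UIImage *thumbnail = [UIImage imageWithCGImage:frameRef];
    CGImageRelease(frameRef); // copyCGImageAtTime returns a +1 reference
    // e.g. self.thumbnailImageView.image = thumbnail;
} else {
    NSLog(@"Could not generate thumbnail: %@", error);
}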

Related

How to get frame from video on iOS

In my app I want to take frames from a video in order to filter them. I try to take a frame from the video by time offset. This is my code:
- (UIImage *)getVideoFrameForTime:(NSDate *)time {
    CGImageRef thumbnailImageRef = NULL;
    NSError *igError = nil;
    NSTimeInterval timeinterval = [time timeIntervalSinceDate:self.videoFilterStart];
    CMTime atTime = CMTimeMakeWithSeconds(timeinterval, 1000);
    thumbnailImageRef = [self.assetImageGenerator copyCGImageAtTime:atTime
                                                         actualTime:NULL
                                                              error:&igError];
    if (!thumbnailImageRef) {
        NSLog(@"thumbnailImageGenerationError %@", igError);
    }
    UIImage *image = thumbnailImageRef ? [[UIImage alloc] initWithCGImage:thumbnailImageRef] : nil;
    if (thumbnailImageRef) {
        CGImageRelease(thumbnailImageRef); // copyCGImageAtTime returns a +1 reference
    }
    return image;
}
Unfortunately, I only see frames located at integer seconds: 1, 2, 3... even when the time interval is non-integer (1.5, etc.).
How do I get frames at any non-integer interval?
Thanks to @shallowThought I found an answer in this question: Grab frames from video using Swift.
You just need to add these two lines:
assetImgGenerate.requestedTimeToleranceAfter = kCMTimeZero;
assetImgGenerate.requestedTimeToleranceBefore = kCMTimeZero;
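Putting the two together, a hedged sketch of an exact-time grab in Objective-C (this assumes the self.assetImageGenerator from the question above has already been created; the 1.5-second offset and the 600 timescale are illustrative):

self.assetImageGenerator.requestedTimeToleranceBefore = kCMTimeZero;
self.assetImageGenerator.requestedTimeToleranceAfter = kCMTimeZero;

NSError *igError = nil;
CMTime atTime = CMTimeMakeWithSeconds(1.5, 600); // 600 is a common video timescale
CGImageRef frameRef = [self.assetImageGenerator copyCGImageAtTime:atTime
                                                       actualTime:NULL
                                                            error:&igError];
UIImage *frame = frameRef ? [UIImage imageWithCGImage:frameRef] : nil;
if (frameRef) {
    CGImageRelease(frameRef);
}
// With zero tolerances, frame is decoded at 1.5 seconds rather than snapping to the nearest keyframe.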
Use this project to get more frame details
The corresponding project on github: iFrameExtractor.git
If I remember correctly, NSDate's accuracy only goes up to the second, which explains why frames are only taken at integer seconds. You'll have to use a different type of input to get frames at non-integer seconds.

How to create an image with black and white pixels?

I'm trying to create a black and white image that, on the one hand, can be displayed in a UIImageView and, on the other hand, can be shared via e-mail or saved to the camera roll.
I've got a two dimensional array (NSArray containing other NSArrays) which contains a matrix with the NSInteger values 0 and 1 (for a white and a black pixel).
So I just want to place a black pixel for a 1 and a white one for a 0.
All other questions I found deal with changing pixels from an existing image.
I hope you can help me!
[EDIT]:
Of course, I didn't want someone to do my work for me. I couldn't figure out how to create a new image and place the pixels the way I wanted, so I tried to edit an existing image that consists only of white pixels and change the color of a pixel where necessary. Below is my code for trying this; as you can see, I had no idea how to change the pixel. I hope that shows I was trying on my own.
- (UIImage *)createQRCodeWithText:(NSString *)text andErrorCorrectionLevel:(NSInteger)level {
    QRGenerator *qr = [[QRGenerator alloc] init];
    NSArray *matrix = [qr createQRCodeMatrixWithText:text andCorrectionLevel:level];
    UIImage *image_base = [UIImage imageNamed:@"qr_base.png"];
    CGImageRef imageRef = image_base.CGImage;
    for (int row = 0; row < [matrix count]; row++) {
        for (int column = 0; column < [matrix count]; column++) {
            if ([[[matrix objectAtIndex:row] objectAtIndex:column] integerValue] == 1) {
                // set pixel (column, row) black
            }
        }
    }
    return [[UIImage alloc] initWithCGImage:imageRef];
}
I would create the image from scratch using a CGBitmapContext. You can call CGBitmapContextCreate() to allocate the memory for the image. You can then walk through the pixels just like you're doing now, setting them from your array. When you've finished, you can call CGBitmapContextCreateImage() to make a CGImageRef out of it. If you need a UIImage you can call [+UIImage imageWithCGImage:] and pass it the CGImageRef you created above.
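A hedged sketch of that approach, assuming matrix is the square NSArray-of-NSArrays of 0/1 values described in the question (the method name is illustrative):

- (UIImage *)imageFromMatrix:(NSArray *)matrix {
    size_t size = [matrix count]; // assume a square matrix of 0/1 values

    // Let CoreGraphics allocate the pixel buffer (data == NULL, bytesPerRow == 0).
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGContextRef context = CGBitmapContextCreate(NULL, size, size, 8, 0,
                                                 colorSpace, (CGBitmapInfo)kCGImageAlphaNone);
    uint8_t *pixels = CGBitmapContextGetData(context);
    size_t bytesPerRow = CGBitmapContextGetBytesPerRow(context);

    for (NSUInteger row = 0; row < size; row++) {
        for (NSUInteger column = 0; column < size; column++) {
            NSInteger value = [[[matrix objectAtIndex:row] objectAtIndex:column] integerValue];
            // 1 -> black (0x00), 0 -> white (0xFF), one byte per pixel in a grayscale context
            pixels[row * bytesPerRow + column] = (value == 1) ? 0x00 : 0xFF;
        }
    }

    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage *image = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    return image;
}

Each matrix entry becomes a single pixel, so for display you would normally scale the resulting UIImage up rather than generate it at screen size.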

Draw on UIImage to erase and see through, revealing the UIImage layer underneath [duplicate]

This question already has an answer here:
How to erase piece of UIImageView with png-brush and UIBezierPath
(1 answer)
Closed 9 years ago.
I spent the last few days trying to find a way, but I'm stuck.
At a certain point in my application I would like two UIImages overlaid, and when I start "drawing" on the first layer, it would erase that image and let you see through to reveal the content underneath.
I don't know my way around Core Graphics, and after spending days on the net I'm wondering if it is possible.
Is there anyone who could help me, or point me in the direction I should follow?
Any help would be very appreciated.
Have a nice day.
Create a ScratchView class and set up your scratch image in its init method, like this:
- (id)initWithFrame:(CGRect)frame {
    self = [super initWithFrame:frame];
    if (self) {
        scratchable = [UIImage imageNamed:@"scratchable.jpg"].CGImage;
        width = CGImageGetWidth(scratchable);
        height = CGImageGetHeight(scratchable);
        self.opaque = NO;

        CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceGray();
        CFMutableDataRef pixels = CFDataCreateMutable(NULL, width * height);
        alphaPixels = CGBitmapContextCreate(CFDataGetMutableBytePtr(pixels), width, height, 8, width, colorspace, kCGImageAlphaNone);
        provider = CGDataProviderCreateWithCFData(pixels);

        CGContextSetFillColorWithColor(alphaPixels, [UIColor blackColor].CGColor);
        CGContextFillRect(alphaPixels, frame);
        CGContextSetStrokeColorWithColor(alphaPixels, [UIColor whiteColor].CGColor);
        CGContextSetLineWidth(alphaPixels, 20.0);
        CGContextSetLineCap(alphaPixels, kCGLineCapRound);

        CGImageRef mask = CGImageMaskCreate(width, height, 8, 8, width, provider, nil, NO);
        scratched = CGImageCreateWithMask(scratchable, mask);
        CGImageRelease(mask);
        CGColorSpaceRelease(colorspace);
    }
    return self;
}
Also create a second view class to display the background image that appears after scratching.
The example below should be very useful to you; try it:
https://github.com/oyiptong/CGScratch
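For completeness, a hedged sketch of how the actual scratching might be wired up in the same ScratchView, using the ivars from the init above (alphaPixels, provider, scratchable, scratched, width, height). The y-flip and the drawRect: below are my assumptions, not part of the original answer:

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint point = [[touches anyObject] locationInView:self];
    CGPoint previous = [[touches anyObject] previousLocationInView:self];

    // Stroke white into the mask context; in an image mask, white samples
    // become transparent, so these areas are "erased".
    // The bitmap context's origin is bottom-left, so flip y
    // (this assumes the view's size matches the mask's pixel size).
    CGContextMoveToPoint(alphaPixels, previous.x, height - previous.y);
    CGContextAddLineToPoint(alphaPixels, point.x, height - point.y);
    CGContextStrokePath(alphaPixels);

    // Rebuild the masked image from the updated pixel data and redraw.
    CGImageRef mask = CGImageMaskCreate(width, height, 8, 8, width, provider, NULL, NO);
    CGImageRelease(scratched);
    scratched = CGImageCreateWithMask(scratchable, mask);
    CGImageRelease(mask);

    [self setNeedsDisplay];
}

- (void)drawRect:(CGRect)rect {
    // Draw the masked "scratchable" image; erased areas reveal whatever sits underneath this view.
    [[UIImage imageWithCGImage:scratched] drawInRect:self.bounds];
}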

Zxing zoom functionality in ios AVFoundation

I have been scouring the internet and have been looking high and low for any type of code to help me zoom in on barcodes using ZXing.
I started with the code from their git site here
https://github.com/zxing/zxing
Since then I have been able to increase the default resolution to 1920x1080.
self.captureSession.sessionPreset = AVCaptureSessionPreset1920x1080;
This would be fine, but the issue is that I am scanning very small barcodes, and even though 1920x1080 helps, it doesn't give me any kind of zoom to get closer to a smaller barcode without losing focus. The resolution helped me quite a bit, but it's simply not close enough.
I'm thinking what I need to do is set the capture session on a scroll view that is 1920x1080 and then have the actual image capture take from the bounds of my screen, so I can zoom in and out of the scroll view itself to achieve a "zoom" kind of effect.
The problem with that is I'm really not sure where to start... any ideas?
OK, since I have seen this multiple times on here and no one seems to have an answer, I thought I would share my own.
There are two properties NO ONE seems to know about. I'll cover both.
The first one is good for iOS 6+. Apple added a property called videoScaleAndCropFactor.
This property is a CGFloat. The only downfall is that you have to set the value on your AVCaptureConnection, and that connection has to be used with a still image capture; it will not work with anything else in iOS 6. In order to do this you have to set it up to work asynchronously, and you have to loop your code so the decoder keeps working and taking pictures at that scale.
The last thing is that you have to scale your preview layer yourself.
This all sounds like a lot of work, and it really is. However, this takes your original scan picture at 1920x1080, or whatever you have it set to, rather than scaling up an existing image, which stretches pixels and causes the decoder to miss the barcode.
So it will look something like this:
stillImageConnection = [stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
[stillImageConnection setVideoOrientation:AVCaptureVideoOrientationPortrait];
[stillImageConnection setVideoScaleAndCropFactor:effectiveScale];

[stillImageOutput setOutputSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCMPixelFormat_32BGRA]
                                                                forKey:(id)kCVPixelBufferPixelFormatTypeKey]];

[stillImageOutput captureStillImageAsynchronouslyFromConnection:stillImageConnection
                                               completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error)
{
    if (error)
        return;

    NSString *path = [NSString stringWithFormat:@"%@%@",
                      [[NSBundle mainBundle] resourcePath],
                      @"/blank.wav"];
    SystemSoundID soundID;
    NSURL *filePath = [NSURL fileURLWithPath:path isDirectory:NO];
    AudioServicesCreateSystemSoundID((__bridge CFURLRef)filePath, &soundID);
    AudioServicesPlaySystemSound(soundID);

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(imageDataSampleBuffer);

    /* Lock the image buffer */
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    /* Get information about the image */
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    uint8_t *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    void *free_me = 0;
    if (true) { // iOS bug?
        uint8_t *tmp = baseAddress;
        size_t bytes = bytesPerRow * height;
        free_me = baseAddress = (uint8_t *)malloc(bytes);
        baseAddress[0] = 0xdb;
        memcpy(baseAddress, tmp, bytes);
    }

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext =
        CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,
                              kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst);
    CGImageRef capture = CGBitmapContextCreateImage(newContext);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    free(free_me);
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);

    Decoder *d = [[Decoder alloc] init];
    [self decoding:d withimage:&capture];
}];
Now the second one, coming in iOS 7, changes EVERYTHING I just said. There is a new property called videoZoomFactor. It is a CGFloat, but it changes everything at the top of the stack rather than only affecting the still image capture.
In other words, you won't have to manually zoom your preview layer, you won't have to go through a still-image capture loop, and you won't have to set anything on an AVCaptureConnection. You simply set the CGFloat and it scales everything for you.
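For reference, once you can target iOS 7, setting it might look roughly like this (a sketch, assuming device is your AVCaptureDevice and desiredZoom is whatever value your pinch gesture produces):

NSError *zoomError = nil;
if ([device lockForConfiguration:&zoomError]) {
    // Clamp to what the active format supports, and never go below 1.0.
    CGFloat zoom = MIN(desiredZoom, device.activeFormat.videoMaxZoomFactor);
    device.videoZoomFactor = MAX((CGFloat)1.0, zoom);
    [device unlockForConfiguration];
} else {
    NSLog(@"Could not lock device for configuration: %@", zoomError);
}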
Now I know it's going to be a while before you can publish iOS 7 applications, so I would seriously consider figuring out how to do this the hard way. Quick tips: I would use a pinch-to-zoom gesture to set your CGFloat for videoScaleAndCropFactor. Don't forget to set the value to 1 in viewDidLoad so you can scale from there. At the same time, in your gesture handler you can use a CATransaction to scale the preview layer.
Here's a sample of how to handle the pinch gesture and scale the preview layer:
- (IBAction)handlePinchGesture:(UIPinchGestureRecognizer *)recognizer
{
    effectiveScale = recognizer.scale;
    if (effectiveScale < 1.0)
        effectiveScale = 1.0;
    if (effectiveScale > 25)
        effectiveScale = 25;

    stillImageConnection = [stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
    [stillImageConnection setVideoScaleAndCropFactor:effectiveScale];

    [CATransaction begin];
    [CATransaction setAnimationDuration:0];
    [prevLayer setAffineTransform:CGAffineTransformMakeScale(effectiveScale, effectiveScale)];
    [CATransaction commit];
}
Hope this helps someone out! I may go ahead and just do a video tutorial on this; it depends on what kind of demand there is for it, I guess.

How to convert series of images into movie in iOS?

Can anyone tell me how to convert a series of images into a movie, which I want to store in the Camera Roll? At the moment I am using an image animation.
Sample code below:
CGRect myImageRect = CGRectMake(0.0f, 0.0f, 768, 1024);
UIImageView *myAnimatedView = [[UIImageView alloc] initWithFrame:myImageRect];
myAnimatedView.animationImages = appDelegate.imageArray;
myAnimatedView.animationDuration = 15.0;
myAnimatedView.animationRepeatCount = 0;
[myAnimatedView startAnimating];
Now I want to convert this image animation into a movie, which I want to save to the Camera Roll of the iPad as a movie (QuickTime movie).
Thanks.
This can be done with the AVFoundation framework. For example, check out this post:
Make movie file with picture Array and song file, using AVAsset
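In outline, the AVFoundation route uses an AVAssetWriter with a pixel buffer adaptor. A hedged sketch of that approach (the method and the pixelBufferFromImage: helper are illustrative names of my own, not library API, and error handling is kept minimal):

- (void)writeImages:(NSArray *)images toMovieAtPath:(NSString *)path size:(CGSize)size {
    NSError *error = nil;
    AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:[NSURL fileURLWithPath:path]
                                                     fileType:AVFileTypeQuickTimeMovie
                                                        error:&error];
    NSDictionary *settings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                                AVVideoWidthKey  : @(size.width),
                                AVVideoHeightKey : @(size.height) };
    AVAssetWriterInput *input = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                                   outputSettings:settings];
    AVAssetWriterInputPixelBufferAdaptor *adaptor =
        [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:input
                                                                         sourcePixelBufferAttributes:nil];
    [writer addInput:input];
    [writer startWriting];
    [writer startSessionAtSourceTime:kCMTimeZero];

    int32_t fps = 1; // one image per second; adjust to taste
    for (NSUInteger i = 0; i < images.count; i++) {
        while (!input.readyForMoreMediaData) { /* for brevity; real code should use requestMediaDataWhenReadyOnQueue:usingBlock: */ }
        CVPixelBufferRef buffer = [self pixelBufferFromImage:images[i] size:size];
        [adaptor appendPixelBuffer:buffer withPresentationTime:CMTimeMake((int64_t)i, fps)];
        CVPixelBufferRelease(buffer);
    }

    [input markAsFinished];
    [writer finishWritingWithCompletionHandler:^{
        // The finished movie at path can then be saved to the Camera Roll,
        // e.g. with UISaveVideoAtPathToSavedPhotosAlbum(path, nil, nil, nil).
    }];
}

// Hypothetical helper: renders a UIImage into a freshly created CVPixelBuffer.
- (CVPixelBufferRef)pixelBufferFromImage:(UIImage *)image size:(CGSize)size {
    CVPixelBufferRef buffer = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault, (size_t)size.width, (size_t)size.height,
                        kCVPixelFormatType_32ARGB, NULL, &buffer);
    CVPixelBufferLockBaseAddress(buffer, 0);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(buffer),
                                                 (size_t)size.width, (size_t)size.height, 8,
                                                 CVPixelBufferGetBytesPerRow(buffer),
                                                 colorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaNoneSkipFirst);
    CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), image.CGImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    CVPixelBufferUnlockBaseAddress(buffer, 0);
    return buffer;
}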
As far as I know, using the FFmpeg library will be a problem when trying to publish on the App Store.
You need to learn and use the FFmpeg library.
