I want to take a video file recorded with the camera (.mp4) and convert it to an animated GIF image.
I looked through the Apple docs and there doesn't seem to be any built-in function for this.
How should I approach this task?
You can do it in a few steps.
Step 1 - Calculate the required frame count
CMTime vid_length = asset.duration;
float seconds = CMTimeGetSeconds(vid_length);
int required_frames_count = seconds * 12.5; // target GIF frame rate; adjust as needed
int64_t step = vid_length.value / required_frames_count;
int value = 0;
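For example, a 10-second clip sampled at 12.5 fps gives required_frames_count = 125; assuming the asset duration uses the common timescale of 600, vid_length.value is 6000 and step works out to 6000 / 125 = 48 timescale units between frames.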
Step 2 - Set up the GIF file and its properties
destination = CGImageDestinationCreateWithURL((CFURLRef)[NSURL fileURLWithPath:path],
kUTTypeGIF,
required_frames_count,
NULL);
frameProperties = [NSDictionary dictionaryWithObject:[NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:0.8f] forKey:(NSString *)kCGImagePropertyGIFDelayTime] // per-frame delay in seconds (must be a float, not an int)
forKey:(NSString *)kCGImagePropertyGIFDictionary];
gifProperties = [NSDictionary dictionaryWithObject:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:0] forKey:(NSString *)kCGImagePropertyGIFLoopCount]
forKey:(NSString *)kCGImagePropertyGIFDictionary];
Step 3 - Generate frames from the video asset with AVAssetImageGenerator
for (int i = 0; i < required_frames_count; i++) {
AVAssetImageGenerator *image_generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
image_generator.requestedTimeToleranceAfter = kCMTimeZero;
image_generator.requestedTimeToleranceBefore = kCMTimeZero;
image_generator.appliesPreferredTrackTransform = YES;
image_generator.maximumSize = CGSizeMake(wd, ht); // bounding box for the generated frames (e.g. 640 x 640); the aspect ratio is preserved
CMTime time = CMTimeMake(value, vid_length.timescale);
CGImageRef image_ref = [image_generator copyCGImageAtTime:time actualTime:NULL error:NULL];
UIImage *thumb = [UIImage imageWithCGImage:image_ref];
[self mergeFrameForGif:thumb];
CGImageRelease(image_ref);
value += step;
}
NSLog(@"Overall size of image (bytes): %ld", t); // t: running byte total, tracked elsewhere
CGImageDestinationSetProperties(destination, (CFDictionaryRef)gifProperties);
CGImageDestinationFinalize(destination);
CFRelease(destination);
NSLog(@"animated GIF file created at %@", path);
Step 4 - Add each frame to the GIF file
- (void)mergeFrameForGif:(UIImage*)pic1
{
CGImageDestinationAddImage(destination, pic1.CGImage, (CFDictionaryRef)frameProperties);
pic1=nil;
}
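(In this walkthrough, destination, frameProperties and gifProperties are assumed to be instance variables set up in step 2, asset is the AVURLAsset for the source video, wd/ht are the bounding-box dimensions, and path is the output file path.)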
There is no built-in API for that. I released a library that converts video files to animated GIF images while giving enough flexibility to tweak settings such as frame rate, frame duration, size, etc.
The library is called NSGIF. You can find it here: http://github.com/NSRare/NSGIF
This is the simplest way to convert a video to a GIF:
[NSGIF optimalGIFfromURL:url loopCount:0 completion:^(NSURL *GifURL) {
NSLog(@"Finished generating GIF: %@", GifURL);
}];
The optimalGIFfromURL method automatically generates the GIF based on optimal settings. There is also room for far more flexibility; check out the repo for more samples.
Updated for Swift 5.1
import Foundation
import UIKit
import AVFoundation
import ImageIO
import MobileCoreServices // for kUTTypeGIF
func makeGIF(asset: AVAsset, destinationURL: URL, width: CGFloat, height: CGFloat) {
let duration = asset.duration
let vid_length : CMTime = duration
let seconds : Double = CMTimeGetSeconds(vid_length)
let tracks = asset.tracks(withMediaType: .video)
let fps = tracks.first?.nominalFrameRate ?? 1.0
let required_frames_count : Int = Int(seconds * Double(fps)) // one GIF frame per video frame; adjust as needed
let step : Int64 = vid_length.value / Int64(required_frames_count)
var value : CMTimeValue = 0
let destination = CGImageDestinationCreateWithURL(destinationURL as CFURL, kUTTypeGIF, required_frames_count, nil)
let gifProperties : CFDictionary = [ kCGImagePropertyGIFDictionary : [kCGImagePropertyGIFLoopCount : 0] ] as CFDictionary
for _ in 0 ..< required_frames_count {
let image_generator : AVAssetImageGenerator = AVAssetImageGenerator.init(asset: asset)
image_generator.requestedTimeToleranceAfter = CMTime.zero
image_generator.requestedTimeToleranceBefore = CMTime.zero
image_generator.appliesPreferredTrackTransform = true
// to get an unscaled image or define a bounding box of 640px, aspect ratio remains
image_generator.maximumSize = CGSize(width: width, height: height)
let time : CMTime = CMTime(value: value, timescale: vid_length.timescale)
do {
let image_ref : CGImage = try image_generator.copyCGImage(at: time, actualTime: nil)
let thumb : UIImage = UIImage.init(cgImage: image_ref)
mergeFrameForGif(frame: thumb, destination: destination!)
} catch {
// skip frames that could not be generated
}
value = value + step // advance to the next frame time even if this frame failed
}
//print("Overall Size of Image(bytes): \(t)")
CGImageDestinationSetProperties(destination!, gifProperties)
CGImageDestinationFinalize(destination!)
print("animated GIF file created at \(destinationURL)")
}
func mergeFrameForGif(frame: UIImage, destination: CGImageDestination) {
let frameProperties : CFDictionary = [ kCGImagePropertyGIFDictionary : [kCGImagePropertyGIFDelayTime : 0.8] ] as CFDictionary
CGImageDestinationAddImage(destination, frame.cgImage!, frameProperties)
}
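A minimal usage sketch (the input path here is hypothetical, and the heavy frame extraction is best kept off the main thread):
let videoURL = URL(fileURLWithPath: "/path/to/video.mp4") // hypothetical input path
let asset = AVAsset(url: videoURL)
let gifURL = FileManager.default.temporaryDirectory.appendingPathComponent("output.gif")
DispatchQueue.global(qos: .userInitiated).async {
    // 640 x 640 acts as a bounding box; AVAssetImageGenerator preserves the aspect ratio
    makeGIF(asset: asset, destinationURL: gifURL, width: 640, height: 640)
}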
Related
I am new to Metal.
I want to make an MTLTexture from a CVImageBufferRef.
I am using the following code sample to do that.
guard
let unwrappedImageTexture = imageTexture,
let texture = CVMetalTextureGetTexture(unwrappedImageTexture),
result == kCVReturnSuccess
else {
throw MetalCameraSessionError.failedToCreateTextureFromImage
}
Here, imageTexture is a CVMetalTexture.
Here is my code in Objective-C.
CVMetalTextureRef inputTexture;
NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
AVAssetReaderTrackOutput *track = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:video
outputSettings:@{
(NSString *)kCVPixelBufferMetalCompatibilityKey: @YES,
key: value
}];
sampleBuffer = [track copyNextSampleBuffer];
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
if(kCVReturnSuccess != CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, _context.textureCache , imageBuffer, NULL, MTLPixelFormatBGRA8Unorm, width, height, 0, &inputTexture)){
__VMLog(@"Texture Creation Error");
}
id<MTLTexture> it = CVMetalTextureGetTexture(inputTexture); //Returns nil
I always get nil for my MTLTexture variable. Even the texture creation error is not triggered, but the MTLTexture is not generated.
I found the solution. It seems it needs an array of id to get the MTLTexture.
//Wrong approach
id<MTLTexture> it = CVMetalTextureGetTexture(inputTexture);
//Right approach
id<MTLTexture> it[1];
it[0] = CVMetalTextureGetTexture(inputTexture);
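For reference, here is a minimal Swift sketch of the texture-cache route the question's Swift snippet is based on (the device and pixelBuffer names are assumptions, and in a real pipeline the cache should be created once and reused). A common gotcha is that the CVMetalTexture must be kept alive for as long as the MTLTexture is in use:
import Metal
import CoreVideo
// Creates an MTLTexture from a BGRA CVPixelBuffer via a Metal texture cache.
func makeTexture(from pixelBuffer: CVPixelBuffer, device: MTLDevice) -> MTLTexture? {
    var cache: CVMetalTextureCache?
    guard CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &cache) == kCVReturnSuccess,
          let textureCache = cache else { return nil }
    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)
    var cvTexture: CVMetalTexture?
    let result = CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, pixelBuffer,
                                                           nil, .bgra8Unorm, width, height, 0, &cvTexture)
    guard result == kCVReturnSuccess, let metalTexture = cvTexture else { return nil }
    // Keep metalTexture (the CVMetalTexture) alive while the returned MTLTexture is in use.
    return CVMetalTextureGetTexture(metalTexture)
}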
I want to use video as a texture on an object in OpenGL ES 2.0 on iOS.
I create an AVPlayer with an AVPlayerItemVideoOutput, configured with:
NSDictionary *videoOutputOptions = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
[NSDictionary dictionary], kCVPixelBufferIOSurfacePropertiesKey,
nil];
self.videoOutput = [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:videoOutputOptions];
Then I get a CVPixelBufferRef for each moment in time:
CMTime currentTime = [self.videoOutput itemTimeForHostTime:CACurrentMediaTime()];
CVPixelBufferRef buffer = [self.videoOutput copyPixelBufferForItemTime:currentTime itemTimeForDisplay:NULL];
Then I convert it to a UIImage with this method:
+ (UIImage *)imageWithCVPixelBufferUsingUIGraphicsContext:(CVPixelBufferRef)pixelBuffer
{
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
int w = (int)CVPixelBufferGetWidth(pixelBuffer);
int h = (int)CVPixelBufferGetHeight(pixelBuffer);
int r = (int)CVPixelBufferGetBytesPerRow(pixelBuffer);
int bytesPerPixel = r/w;
unsigned char *bufferU = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer);
UIGraphicsBeginImageContext(CGSizeMake(w, h));
CGContextRef c = UIGraphicsGetCurrentContext();
unsigned char* data = CGBitmapContextGetData(c);
if (data) {
int maxY = h;
for(int y = 0; y < maxY; y++) {
for(int x = 0; x < w; x++) {
int offset = bytesPerPixel*((w*y)+x);
data[offset] = bufferU[offset]; // R
data[offset+1] = bufferU[offset+1]; // G
data[offset+2] = bufferU[offset+2]; // B
data[offset+3] = bufferU[offset+3]; // A
}
}
}
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
CFRelease(pixelBuffer);
return image;
}
As a result I get the required frame from the video.
After all that, I try to update the texture with:
- (void)setupTextureWithImage:(UIImage *)image
{
if (_texture.name) {
GLuint textureName = _texture.name;
glDeleteTextures(1, &textureName);
}
NSError *error;
_texture = [GLKTextureLoader textureWithCGImage:image.CGImage options:nil error:&error];
if (error) {
NSLog(@"Error during loading texture: %@", error);
}
}
I call this method in the GLKView's update method, but as a result I get a black screen; only the audio is available.
Can anyone explain what I'm doing wrong? It looks like I'm doing something wrong with the textures...
The issue is most likely somewhere other than in the code you posted. To check the texture itself, create a snapshot (a feature in Xcode) and see if the correct texture shows up there. Maybe your coordinates are incorrect, or some parameters are missing when displaying the textured object; it could be that you forgot to enable some attributes or that the shaders are not present...
Since you have gotten this far, I suggest you first try to draw a colored square, then try to apply a texture (not from the video) to it until you get the correct result. Then implement the texture from the video.
And just a suggestion: since you are getting raw pixel data from the video, you should consider creating only one texture and then using the texture sub-image function to update it directly with the data, instead of doing some strange iterations and transformations on the image. glTexSubImage2D will take your buffer pointer directly and do the update, as sketched below.
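A minimal sketch of that idea in Swift (the same calls exist in Objective-C). It assumes a GL context is current, textureID is an existing GL_TEXTURE_2D already allocated at the video's dimensions with BGRA storage, and the pixel buffer has no row padding (bytes per row == width * 4):
import OpenGLES
import CoreVideo
let GL_BGRA_FORMAT = GLenum(0x80E1) // GL_BGRA_EXT, available on iOS via GL_APPLE_texture_format_BGRA8888
func update(texture textureID: GLuint, from pixelBuffer: CVPixelBuffer) {
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }
    guard let baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer) else { return }
    let width = GLsizei(CVPixelBufferGetWidth(pixelBuffer))
    let height = GLsizei(CVPixelBufferGetHeight(pixelBuffer))
    glBindTexture(GLenum(GL_TEXTURE_2D), textureID)
    // Re-upload only the pixel data; no new texture object and no UIImage round trip
    glTexSubImage2D(GLenum(GL_TEXTURE_2D), 0, 0, 0, width, height,
                    GL_BGRA_FORMAT, GLenum(GL_UNSIGNED_BYTE), baseAddress)
}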
I tried launching it on a device and it works fine.
It looks like the problem is that the simulator does not support some operations.
I want to determine the size of an image accessed through a PHAsset, so that I know how much space it occupies on the device. Which method does this? These are the approaches I have tried:
var imageSize = Float(imageData.length)
var image = UIImage(data: imageData)
var jpegSize = UIImageJPEGRepresentation(image, 1)
var pngSize = UIImagePNGRepresentation(image)
var pixelsMultiplied = asset.pixelHeight * asset.pixelWidth
println("regular data: \(imageSize)\nJPEG Size: \(jpegSize.length)\nPNG Size: \(pngSize.length)\nPixel multiplied: \(pixelsMultiplied)")
Results in:
regular data: 1576960.0
JPEG Size: 4604156
PNG Size: 14005689
Pixel multiplied: 7990272
Which one of these values actually represents the amount it occupies on the device?
After emailing the picture to myself and checking the size on the system, it turns out approach ONE is the closest to the actual size.
To get the size of a PHAsset (Image type), I used the following method:
var asset = self.fetchResults[index] as PHAsset
self.imageManager.requestImageDataForAsset(asset, options: nil) { (data:NSData!, string:String!, orientation:UIImageOrientation, object:[NSObject : AnyObject]!) -> Void in
//transform into image
var image = UIImage(data: data)
//Get bytes size of image
var imageSize = Float(data.length)
//Transform into Megabytes
imageSize = imageSize/(1024*1024)
}
Command+I on my MacBook shows the image size as 1,575,062 bytes.
imageSize in my program reports the size as 1,576,960 bytes.
I tested with five other images and the two sizes reported were just as close each time.
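On newer iOS versions the same measurement can be taken with the current API; a minimal sketch, assuming iOS 13+ and a fetched asset:
import Photos
let options = PHImageRequestOptions()
options.isNetworkAccessAllowed = true // fetch the original from iCloud if needed
PHImageManager.default().requestImageDataAndOrientation(for: asset, options: options) { data, _, _, _ in
    if let data = data {
        let megabytes = Double(data.count) / (1024.0 * 1024.0)
        print("Asset data size: \(megabytes) MB")
    }
}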
The NSData approach becomes precarious when data is prohibitively large. You can use the below as an alternative:
[[PHImageManager defaultManager] requestAVAssetForVideo:self.phAsset options:nil resultHandler:^(AVAsset *asset, AVAudioMix *audioMix, NSDictionary *info) {
CGFloat rawSize = 0;
if ([asset isKindOfClass:[AVURLAsset class]])
{
AVURLAsset *URLAsset = (AVURLAsset *)asset;
NSNumber *size;
[URLAsset.URL getResourceValue:&size forKey:NSURLFileSizeKey error:nil];
rawSize = [size floatValue] / (1024.0 * 1024.0);
}
else if ([asset isKindOfClass:[AVComposition class]])
{
// Asset is an AVComposition (e.g. slomo video)
float estimatedSize = 0.0;
NSArray *tracks = [asset tracks]; // the composition's tracks, not self's
for (AVAssetTrack * track in tracks)
{
float rate = [track estimatedDataRate] / 8.0f; // convert bits per second to bytes per second
float seconds = CMTimeGetSeconds([track timeRange].duration);
estimatedSize += seconds * rate;
}
rawSize = estimatedSize;
}
if (completionBlock)
{
NSError *error = info[PHImageErrorKey];
completionBlock(rawSize, error);
}
}];
Or for ALAssets, something like this:
[[[ALAssetsLibrary alloc] init] assetForURL:asset.URL resultBlock:^(ALAsset *asset) {
long long sizeBytes = [[asset defaultRepresentation] size];
if (completionBlock)
{
completionBlock(sizeBytes, nil);
}
} failureBlock:^(NSError *error) {
if (completionBlock)
{
completionBlock(0, error);
}
}];
I have managed to convert PDF pages to NSImage and save them as JPG files. However, the output is at the default 72 DPI. I want to change the DPI to 300, but failed. Below is the code:
- (IBAction)TestButton:(id)sender {
NSString* localDocuments = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,NSUserDomainMask, YES) objectAtIndex:0];
NSString* pdfPath = [localDocuments stringByAppendingPathComponent:@"1.pdf"];
NSData *pdfData = [NSData dataWithContentsOfFile:pdfPath];
NSPDFImageRep *pdfImg = [NSPDFImageRep imageRepWithData:pdfData];
NSFileManager *fileManager = [NSFileManager defaultManager];
NSInteger pageCount = [pdfImg pageCount];
for(int i = 0 ; i < pageCount ; i++) {
[pdfImg setCurrentPage:i];
NSImage *temp = [[NSImage alloc] init];
[temp addRepresentation:pdfImg];
CGFloat factor = 300.0/72.0; // scale from 72 DPI to 300 DPI (use floating-point division)
//NSImage *img; // Source image
NSSize newSize = NSMakeSize(temp.size.width*factor, temp.size.height*factor);
NSImage *scaledImg = [[NSImage alloc] initWithSize:newSize];
[scaledImg lockFocus];
[[NSColor whiteColor] set];
[NSBezierPath fillRect:NSMakeRect(0, 0, newSize.width, newSize.height)];
NSAffineTransform *transform = [NSAffineTransform transform];
[transform scaleBy:factor];
[transform concat];
[temp drawAtPoint:NSZeroPoint fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1.0];
[scaledImg unlockFocus];
NSBitmapImageRep *rep = [NSBitmapImageRep imageRepWithData:[temp TIFFRepresentation]];
NSData *finalData = [rep representationUsingType:NSJPEGFileType properties:nil];
NSString *pageName = [NSString stringWithFormat:@"Page_%ld.jpg", (long)[pdfImg currentPage]];
[fileManager createFileAtPath:[NSString stringWithFormat:@"%@%@", pdfPath, pageName] contents:finalData attributes:nil];
}
}
Since OS X 10.8, NSImage has had a block-based initialiser to draw vector-based content into a bitmap.
The idea is to provide a drawing handler that is called whenever a representation of the image is requested.
The relation between points and pixels is expressed by passing an NSSize (in points) to the initialiser and explicitly setting the pixel dimensions for the representation:
NSString* localDocuments = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,NSUserDomainMask, YES) objectAtIndex:0];
NSString* pdfPath = [localDocuments stringByAppendingPathComponent:@"1.pdf"];
NSData* pdfData = [NSData dataWithContentsOfFile:pdfPath];
NSPDFImageRep* pdfImageRep = [NSPDFImageRep imageRepWithData:pdfData];
CGFloat factor = 300.0/72.0; // floating-point division, otherwise the factor truncates to 4
NSInteger pageCount = [pdfImageRep pageCount];
for(int i = 0 ; i < pageCount ; i++)
{
[pdfImageRep setCurrentPage:i];
NSImage* scaledImage = [NSImage imageWithSize:pdfImageRep.size flipped:NO drawingHandler:^BOOL(NSRect dstRect) {
[pdfImageRep drawInRect:dstRect];
return YES;
}];
NSImageRep* scaledImageRep = [[scaledImage representations] firstObject];
/*
* The sizes of the PDF Image Rep and the [NSImage imageWithSize: drawingHandler:]-context
* are defined in terms of points.
* By explicitly setting the size of the scaled representation in Pixels, you
* define the relation between points & pixels.
*/
scaledImageRep.pixelsWide = pdfImageRep.size.width * factor;
scaledImageRep.pixelsHigh = pdfImageRep.size.height * factor;
NSBitmapImageRep* pngImageRep = [NSBitmapImageRep imageRepWithData:[scaledImage TIFFRepresentation]];
NSData* finalData = [pngImageRep representationUsingType:NSJPEGFileType properties:nil];
NSString* pageName = [NSString stringWithFormat:@"Page_%ld.jpg", (long)[pdfImageRep currentPage]];
[[NSFileManager defaultManager] createFileAtPath:[NSString stringWithFormat:@"%@%@", pdfPath, pageName] contents:finalData attributes:nil];
}
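For a sense of scale: with factor = 300.0/72.0 ≈ 4.17, an A4 page of 595 × 842 points comes out as a bitmap of roughly 2480 × 3508 pixels.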
You can set the resolution saved in an image file's metadata by setting the size of the NSImageRep to something other than the image's size:
[pngImageRep setSize:NSMakeSize(targetWidth, targetHeight)]
where you have to initialize targetWidth and targetHeight to the values you want.
Edit: and I guess you wanted to write "scaledImg", not "temp":
NSBitmapImageRep *rep = [NSBitmapImageRep imageRepWithData:[scaledImg TIFFRepresentation]];
Edit 2: On second thought, this will get you a larger image, but only as a stretched-out version of the smaller one. The approach in weichsel's answer, with the modification below, is probably what you really want (the code above is still valid for setting the metadata):
NSSize newSize = NSMakeSize(pdfImageRep.size.width * factor,pdfImageRep.size.height * factor);
NSImage* scaledImage = [NSImage imageWithSize:newSize flipped:NO drawingHandler:^BOOL(NSRect dstRect) {
[pdfImageRep drawInRect:dstRect];
return YES;
}];
I am trying to create a video file from images produced by the ImageMagick library. After applying some effects one by one, such as an opacity change, the file is created successfully, but QuickTime Player gives the error "video file could not be opened. The movie's file format isn't recognized."
I am using the following code:
double d = 0.00;
- (void)posterizeImageWithCompression:(id)sender {
// Here we use JPEG compression.
NSLog(@"we're using JPEG compression");
MagickWandGenesis();
magick_wand = NewMagickWand();
magick_wand = [self magiWandWithImage:[UIImage imageNamed:@"iphone.png"]];
MagickBooleanType status;
status = MagickSetImageOpacity(magick_wand, d);
if (status == MagickFalse) {
ThrowWandException(magick_wand);
}
if (status == MagickFalse) {
ThrowWandException(magick_wand);
}
size_t my_size;
unsigned char * my_image = MagickGetImageBlob(magick_wand, &my_size);
NSData * data = [[NSData alloc] initWithBytes:my_image length:my_size];
free(my_image);
magick_wand = DestroyMagickWand(magick_wand);
MagickWandTerminus();
UIImage * image = [[UIImage alloc] initWithData:data];
d = d + 0.05;
if (status == MagickFalse) {
ThrowWandException(magick_wand);
}
NSData *data1;
NSArray *paths;
NSString *documentsDirectory,*imagePath ;
UIImage *image1 = image;
paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
documentsDirectory = [paths objectAtIndex:0];
imagePath = [documentsDirectory stringByAppendingPathComponent:[NSString stringWithFormat:@"%f.png",d]];
data1 = UIImagePNGRepresentation(image1);
if (d <= 1.0 ) {
[data1 writeToFile:imagePath atomically:YES];
[imageViewButton setImage:image forState:UIControlStateNormal];
// If ready to have more media data
if (assetWriterPixelBufferAdaptor.assetWriterInput.readyForMoreMediaData) {
CVReturn cvErr = kCVReturnSuccess;
// get screenshot image!
CGImageRef image1 = (CGImageRef) image.CGImage;
// prepare the pixel buffer
CVPixelBufferRef pixelsBuffer = NULL;
// Lock pixel buffer address
CVPixelBufferLockBaseAddress(pixelsBuffer, 0);
// pixelsBuffer = [self pixelBufferFromCGImage:image1];
CVPixelBufferUnlockBaseAddress(pixelsBuffer, 0);
CFDataRef imageData= CGDataProviderCopyData(CGImageGetDataProvider(image1));
NSLog (@"copied image data");
cvErr = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
FRAME_WIDTH,
FRAME_HEIGHT,
kCVPixelFormatType_32BGRA,
(void*)CFDataGetBytePtr(imageData),
CGImageGetBytesPerRow(image1),
NULL,
NULL,
NULL,
&pixelsBuffer);
NSLog (@"CVPixelBufferCreateWithBytes returned %d", cvErr);
// calculate the time
CFAbsoluteTime thisFrameWallClockTime = CFAbsoluteTimeGetCurrent();
CFTimeInterval elapsedTime = thisFrameWallClockTime - firstFrameWallClockTime;
NSLog (@"elapsedTime: %f", elapsedTime);
CMTime presentationTime = CMTimeMake (elapsedTime * TIME_SCALE, TIME_SCALE);
// write the sample
BOOL appended = [assetWriterPixelBufferAdaptor appendPixelBuffer:pixelsBuffer withPresentationTime:presentationTime];
if (appended) {
NSLog (@"appended sample at time %lf", CMTimeGetSeconds(presentationTime));
} else {
NSLog (@"failed to append");
[self stopRecording];
}
// Release pixel buffer
CVPixelBufferRelease(pixelsBuffer);
CFRelease(imageData);
}
}
}
It also shows errors like:
VideoToolbox`vt_Copy_32BGRA_2vuyITU601 + 91
VideoToolbox`vtPixelTransferSession_InvokeBlitter + 574
VideoToolbox`VTPixelTransferSessionTransferImage + 14369
VideoToolbox`VTCompressionSessionEncodeFrame + 1077
MediaToolbox`sbp_vtcs_processSampleBuffer + 599
The QuickTime Player error is not very informative. Usually, players will provide the name of the missing codec. If you generated the file and played it back programmatically on the same computer where QuickTime could not play it, then, for whatever reason, the codec the program library could obviously see (because it used it) is not registered with the OS. In the shell, you can run "file <filename>" to get the file type and possibly the codec. If not, you should be able to identify the codec with VLC, transcode, or GStreamer, and then follow Apple's instructions for downloading and installing the needed codec directly into the OS.