How to convert a series of images into a movie in iOS?

Can anyone tell me how to convert a series of images into a movie that I want to store in the Camera Roll? Currently I am using an image animation.
Sample code below:
CGRect myImageRect = CGRectMake(0.0f, 0.0f, 768, 1024);
UIImageView *myAnimatedView = [[UIImageView alloc] initWithFrame:myImageRect];
myAnimatedView.animationImages = appDelegate.imageArray;
myAnimatedView.animationDuration = 15.0;
myAnimatedView.animationRepeatCount = 0; // 0 = repeat indefinitely
[myAnimatedView startAnimating];
Now I want to convert this image animation into a movie (a QuickTime movie) that I can save to the iPad's Camera Roll.
Thanks

This can be done with the AVFoundation framework. For example, check out this post:
Make movie file with picture Array and song file, using AVAsset
As far as I know, using the FFmpeg library will be a problem when trying to publish on the App Store.
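A minimal sketch of the AVFoundation approach, using AVAssetWriter to append one frame per image. The -pixelBufferFromImage:size: helper is an assumption (not shown here), and the codec settings and frame timing are illustrative rather than a drop-in implementation:
#import <AVFoundation/AVFoundation.h>

// Hypothetical method: writes `images` to a QuickTime movie at `outputURL`.
- (void)writeImages:(NSArray *)images toMovieAtURL:(NSURL *)outputURL size:(CGSize)size
{
    NSError *error = nil;
    AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:outputURL
                                                     fileType:AVFileTypeQuickTimeMovie
                                                        error:&error];
    NSDictionary *settings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                                AVVideoWidthKey  : @(size.width),
                                AVVideoHeightKey : @(size.height) };
    AVAssetWriterInput *input = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                                   outputSettings:settings];
    AVAssetWriterInputPixelBufferAdaptor *adaptor =
        [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:input
                                                                         sourcePixelBufferAttributes:nil];
    [writer addInput:input];
    [writer startWriting];
    [writer startSessionAtSourceTime:kCMTimeZero];

    int32_t fps = 2; // e.g. each image is shown for half a second
    for (NSUInteger i = 0; i < images.count; i++) {
        while (!input.readyForMoreMediaData) { /* wait; in real code use requestMediaDataWhenReadyOnQueue: */ }
        // -pixelBufferFromImage:size: is an assumed helper that renders a UIImage into a CVPixelBufferRef.
        CVPixelBufferRef buffer = [self pixelBufferFromImage:images[i] size:size];
        [adaptor appendPixelBuffer:buffer withPresentationTime:CMTimeMake((int64_t)i, fps)];
        CVPixelBufferRelease(buffer);
    }
    [input markAsFinished];
    [writer finishWritingWithCompletionHandler:^{
        // Then save the file to the Camera Roll, e.g. with UISaveVideoAtPathToSavedPhotosAlbum().
    }];
}
The presentation time CMTimeMake(i, fps) controls how long each still is shown, so the timescale effectively sets your frame rate.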

You need to learn and use the FFmpeg library.

Related

Tesseract OCR not recognizing the image taken from device

I'm using https://github.com/gali8/Tesseract-OCR-iOS/ to make an app that detects text on business cards.
I'm stuck at making Tesseract detect the text in the image.
If I pass in an image through code (from the asset catalog), Tesseract is able to detect it. If I provide an image taken from the camera, Tesseract is not able to recognize it.
- (void)startTess:(UIImage *)img {
    G8Tesseract *tesseract = [[G8Tesseract alloc] initWithLanguage:@"eng"];
    tesseract.delegate = self;
    tesseract.engineMode = G8OCREngineModeTesseractCubeCombined;

    // Optional: Limit the character set Tesseract should try to recognize from
    tesseract.charWhitelist = @"#.,()-,abcdefghijklmnopqrstuvwxyz0123456789";

    // Specify the image Tesseract should recognize on
    tesseract.image = [img g8_blackAndWhite];

    // Optional: Limit the area of the image Tesseract should recognize on to a rectangle
    CGRect tessRect = CGRectMake(0, 0, img.size.width, img.size.height);
    tesseract.rect = tessRect;

    // Optional: Limit recognition time with a few seconds
    tesseract.maximumRecognitionTime = 4.0;

    // Start the recognition
    [tesseract recognize];

    // Retrieve the recognized text
    NSLog(@"text %@", [tesseract recognizedText]);

    // You could retrieve more information about the recognized text with these methods:
    NSArray *characterBoxes = [tesseract recognizedBlocksByIteratorLevel:G8PageIteratorLevelSymbol];
    NSArray *paragraphs = [tesseract recognizedBlocksByIteratorLevel:G8PageIteratorLevelParagraph];
    NSArray *characterChoices = tesseract.characterChoices;
    UIImage *imageWithBlocks = [tesseract imageWithBlocks:characterBoxes drawText:YES thresholded:NO];
    self.imgView.image = imageWithBlocks;

    NSString *result = [[characterBoxes valueForKey:@"description"] componentsJoinedByString:@"\n"];
    _txtView.text = result;
}
Result when image provided from .xcassets:
Result when image taken directly from the camera:
In both cases, Tesseract recognizes the empty space as some random characters. I marked that area in both images (the top-left portion of the image).
I made sure that the image taken from the device camera has the "up" orientation, as some have reported that Tesseract does not recognize images taken from the camera because they come with a 180-degree rotation.
UIImage *chosenImage = info[UIImagePickerControllerOriginalImage];

// Redraw the image (if necessary) so it has the correct orientation:
if (chosenImage.imageOrientation != UIImageOrientationUp) {
    UIGraphicsBeginImageContextWithOptions(chosenImage.size, NO, chosenImage.scale);
    [chosenImage drawInRect:(CGRect){0, 0, chosenImage.size}];
    chosenImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}
What is the best way of debugging this and moving forward?
I submitted an issue on GitHub:
https://github.com/gali8/Tesseract-OCR-iOS/issues/358
Edit:
I have changed the iterator level to G8PageIteratorLevelTextline, and now the image taken by device camera gives the following output:
It is still not accurate. If someone could point out how to improve this, that would be nice.
The official GitHub source of Tesseract mentions various preprocessing methods. Along with those measures, I would suggest using .tiff images instead of .jpg or .png, because any image format other than TIFF compresses the image and reduces its binarization quality.
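As a minimal illustration of that kind of preprocessing (my own sketch, not taken from the Tesseract docs), one common step is to upscale the camera image and redraw it in grayscale before handing it to Tesseract; the scale factor and method name here are assumptions:
// Assumed helper: redraws `image` at `scale` times its size into a grayscale bitmap.
- (UIImage *)preprocessedImageFromImage:(UIImage *)image scale:(CGFloat)scale
{
    CGSize newSize = CGSizeMake(image.size.width * scale, image.size.height * scale);
    CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
    CGContextRef context = CGBitmapContextCreate(NULL,
                                                 (size_t)newSize.width,
                                                 (size_t)newSize.height,
                                                 8,   // bits per component
                                                 0,   // let Core Graphics pick bytes per row
                                                 gray,
                                                 (CGBitmapInfo)kCGImageAlphaNone);
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
    CGContextDrawImage(context, CGRectMake(0, 0, newSize.width, newSize.height), image.CGImage);

    CGImageRef grayImage = CGBitmapContextCreateImage(context);
    UIImage *result = [UIImage imageWithCGImage:grayImage];

    CGImageRelease(grayImage);
    CGContextRelease(context);
    CGColorSpaceRelease(gray);
    return result;
}
You would then assign the result to tesseract.image (optionally still passing it through g8_blackAndWhite) before calling recognize.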

Image Changing CALayer Core Animation

What is the best way of changing the image of my "Sprite" before/during/after an animation?
I display the image:
CALayer *am = [[CALayer alloc] init];
am.contents = img;
am.bounds = CGRectMake(150, 150, 46, 47);
[self.view.layer addSublayer:am];
[am setNeedsDisplay];
Say I have an animation that takes the sprite from location A to location B; how would I change the image of the layer the second it hits location B, or leaves location A?
Thank you!!
You can use a sprite sheet, or you can follow this approach of adding the frames to a plist file:
http://www.raywenderlich.com/1271/how-to-use-animations-and-sprite-sheets-in-cocos2d
(Without cocos2d) If you want to provide the frames manually, then this tutorial is best. It comes with source code too, enjoy: http://mysterycoconut.com/blog/2011/01/cag1/
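If you only need to swap the layer's image when the move animation finishes, a minimal sketch (not from either tutorial, building on the `am` layer from the question) is to wrap the position change in a CATransaction and change contents in its completion block; the destination point and endImage are placeholders:
// Move the layer from A to B, then swap its image once it arrives at B.
CGPoint pointB = CGPointMake(300, 300);                  // placeholder destination
UIImage *endImage = [UIImage imageNamed:@"sprite_end"];  // placeholder image

[CATransaction begin];
[CATransaction setAnimationDuration:1.0];
[CATransaction setCompletionBlock:^{
    // Runs when the implicit animation reaches location B.
    am.contents = (__bridge id)endImage.CGImage;
}];
am.position = pointB; // standalone CALayers animate property changes implicitly
[CATransaction commit];
To change the image the moment the sprite leaves location A instead, set am.contents just before the position change rather than in the completion block.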

Luminosity from iOS camera

I'm trying to make an application and I have to calculate the brightness of the camera, like this application: http://itunes.apple.com/us/app/megaman-luxmeter/id455660266?mt=8
I found this document: http://b2cloud.com.au/tutorial/obtaining-luminosity-from-an-ios-camera
But I don't know how to adapt it to the camera directly rather than to an image. Here is my code:
Image = [[UIImagePickerController alloc] init];
Image.delegate = self;
Image.sourceType = UIImagePickerControllerCameraCaptureModeVideo;
Image.showsCameraControls = NO;
[Image setWantsFullScreenLayout:YES];
Image.view.bounds = CGRectMake(0, 0, 320, 480);
[self.view addSubview:Image.view];

NSArray *dayArray = [NSArray arrayWithObjects:Image, nil];
for (NSString *day in dayArray)
{
    for (int i = 1; i <= 2; i++)
    {
        UIImage *image = [UIImage imageNamed:[NSString stringWithFormat:@"%@%d.png", day, i]];
        unsigned char *pixels = [image rgbaPixels];
        double totalLuminance = 0.0;
        for (int p = 0; p < image.size.width * image.size.height * 4; p += 4)
        {
            totalLuminance += pixels[p] * 0.299 + pixels[p+1] * 0.587 + pixels[p+2] * 0.114;
        }
        totalLuminance /= (image.size.width * image.size.height);
        totalLuminance /= 255.0;
        NSLog(@"%@ (%d) = %f", day, i, totalLuminance);
    }
}
Here are the issues:
"Instance method '-rgbaPixels' not found (return type defaults to 'id')"
&
"Incompatible pointer types initializing 'unsigned char *' with an expression of type 'id'"
Thanks a lot ! =)
Rather than doing expensive CPU-bound processing of each pixel in an input video frame, let me suggest an alternative approach. My open source GPUImage framework has a luminosity extractor built into it, which uses GPU-based processing to give live luminosity readings from the video camera.
It's relatively easy to set this up. You simply need to allocate a GPUImageVideoCamera instance to represent the camera, allocate a GPUImageLuminosity filter, and add the latter as a target for the former. If you want to display the camera feed to the screen, create a GPUImageView instance and add that as another target for your GPUImageVideoCamera.
Your luminosity extractor will use a callback block to return luminosity values as they are calculated. This block is set up using code like the following:
[(GPUImageLuminosity *)filter setLuminosityProcessingFinishedBlock:^(CGFloat luminosity, CMTime frameTime) {
    // Do something with the luminosity
}];
I describe the inner workings of this luminosity extraction in this answer, if you're curious. This extractor runs in ~6 ms for a 640x480 frame of video on an iPhone 4.
One thing you'll quickly find is that the average luminosity from the iPhone camera is almost always around 50% when automatic exposure is enabled. This means that you'll need to supplement your luminosity measurements with exposure values from the camera metadata to obtain any sort of meaningful brightness measurement.
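A minimal setup sketch based on the description above; the property names are assumptions, but GPUImageVideoCamera, GPUImageLuminosity, and GPUImageView are the classes named in the answer:
// Assumed properties on the view controller:
//   @property (strong) GPUImageVideoCamera *videoCamera;
//   @property (strong) GPUImageLuminosity *luminosityFilter;

self.videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                                       cameraPosition:AVCaptureDevicePositionBack];
self.videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

self.luminosityFilter = [[GPUImageLuminosity alloc] init];
[self.videoCamera addTarget:self.luminosityFilter];

// Optional: show the live camera feed on screen as well.
GPUImageView *previewView = [[GPUImageView alloc] initWithFrame:self.view.bounds];
[self.view addSubview:previewView];
[self.videoCamera addTarget:previewView];

[self.luminosityFilter setLuminosityProcessingFinishedBlock:^(CGFloat luminosity, CMTime frameTime) {
    NSLog(@"Average luminosity: %f", luminosity);
    // Remember to factor in the camera's exposure metadata, as noted above.
}];

[self.videoCamera startCameraCapture];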
Why do you place the camera image into an NSArray *dayArray? Five lines later you pull it back out of that array but treat the object as an NSString. An NSString does not have rgbaPixels. The example you copy-pasted has an array of filenames corresponding to pictures taken at different times of the day. It then opens those image files and performs the luminosity analysis.
In your case, there is no file to read. Both outer for loops, i.e. on day and i, will have to go away. You already have access to the image provided through the UIImagePickerController. Right after adding the subview, you could in principle access the pixels as in unsigned char *pixels = [Image rgbaPixels];, where Image is the image you got from the UIImagePickerController.
However, this may not be what you want to do. I imagine that your goal is rather to show the UIImagePickerController in capture mode and then measure luminosity continuously. To this end, you could turn Image into a member variable and then access its pixels repeatedly from a timer callback.
You can import the class below from GitHub to resolve this issue:
https://github.com/maxmuermann/pxl
Add the UIImage+Pixels.h & .m files to the project, then try to run.
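Putting the two answers together, here is a minimal sketch of computing luminance from a single picked image. It assumes the UIImage+Pixels category from the linked repo supplies -rgbaPixels; the delegate method itself is standard UIImagePickerControllerDelegate API:
#import "UIImage+Pixels.h" // assumed to come from https://github.com/maxmuermann/pxl

- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *image = info[UIImagePickerControllerOriginalImage];
    unsigned char *pixels = [image rgbaPixels];

    // Same weighted-average formula as in the question / b2cloud tutorial.
    double totalLuminance = 0.0;
    NSUInteger pixelCount = (NSUInteger)(image.size.width * image.size.height);
    for (NSUInteger p = 0; p < pixelCount * 4; p += 4) {
        totalLuminance += pixels[p] * 0.299 + pixels[p + 1] * 0.587 + pixels[p + 2] * 0.114;
    }
    totalLuminance /= pixelCount; // average per pixel
    totalLuminance /= 255.0;      // normalize to 0..1
    NSLog(@"Luminance: %f", totalLuminance);

    [picker dismissViewControllerAnimated:YES completion:nil];
}
For a continuous reading, you would repeat this kind of computation from a timer callback as suggested above, or use the GPU-based approach from the earlier answer.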

Using GPUImage With a UIView

I'm trying to integrate GPUImage into my app. Specifically, I want to apply the Sphere Refraction filter on my main view. The thing is, GPUImage works with UIImage, not with UIView. In order to create a UIImage representation of my view hierarchy, I'm using -[CALayer renderInContext:], which takes a long time to complete. The net result is that my animations look clunky.
Here's the code that's called in my CADisplayLink handler:
- (void)onDisplayLink:(CADisplayLink *)theDisplayLink {
    self.mainView.layer.opaque = YES;

    UIGraphicsBeginImageContextWithOptions(self.sphereView.bounds.size, self.sphereView.opaque, [[UIScreen mainScreen] scale]);
    [self.sphereView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *mainViewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    self.sourcePicture = [[GPUImagePicture alloc] initWithImage:mainViewImage smoothlyScaleOutput:NO];

    self.sphereRefractionFilter = [[GPUImageSphereRefractionFilter alloc] init];
    self.sphereRefractionFilter.radius = 0.5;
    self.sphereRefractionFilter.refractiveIndex = 0.25;
    [self.sphereRefractionFilter setInputRotation:kGPUImageRotate180 atIndex:0];

    [self.sphereRefractionFilter addTarget:self.mainView];
    [self.sourcePicture addTarget:self.sphereRefractionFilter];
    [self.sourcePicture processImage];
}
The view I'm trying to render using this code has a background image, and about 5-50 smaller images laid out on it, whose positions are modified in real-time. Imagine a sphere with multiple moving markers on it in various places.
Using this code, I'm able to render about 10 FPS. Question is: is there any way to do this faster?
Anyone?

How do I display a progressive JPEG in a UIImageView while it is being downloaded?

Downloading an image from the net and showing it in a UIImageView is fairly easy. However, this requires the image to be completely downloaded before it is shown to the user, completely defeating progressive JPEG (and PNG) images.
How can I render the partially downloaded image while the transfer is in progress? I would imagine the SDK has some callback that would update the image, but I can't find such a function. Is it possible at all with the current iOS SDK?
I know this post is about a year old, but just in case anyone is looking for it, there is a project called NYXImagesKit that does what you are looking for.
It has a class named NYXProgressiveImageView that is a subclass of UIImageView.
All you have to do is:
NYXProgressiveImageView *imgv = [[NYXProgressiveImageView alloc] init];
imgv.frame = CGRectMake(0, 0, 320, 480);
[imgv loadImageAtURL:[NSURL URLWithString:@"http://yourimage"]];
[self.view addSubview:imgv];
[imgv release];
Also, a good option is to save your images as interlaced, so that they load at low quality and improve as the download progresses. If the image is not interlaced, it is loaded from top to bottom.
There is now a small open-source library on top of libjpeg-turbo which allows decoding and displaying progressive JPEGs easily:
let imageView = CCBufferedImageView(frame: ...)
if let url = NSURL(string: "http://example.com/yolo.jpg") {
    imageView.load(url)
}
see https://github.com/contentful-labs/Concorde
Did you try to render the UIImage partially as didReceiveData gets called? Something like this:
// Assumes an NSMutableData property (e.g. self.receivedData) that accumulates the bytes received so far.
- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data
{
    [self.receivedData appendData:data];

    // TODO: add some error handling in case the image could not be created
    UIImage *img = [UIImage imageWithData:self.receivedData];
    if (img) {
        self.imageView.image = img;
    }
}
