GPUImage failed to init ACVFile with data:(null) - ios

First of all, I must say that GPUImage is an excellent framework. However, when loading an ACV file that I exported from Photoshop CS6, it gives me an error saying: failed to init ACVFile with data:(null). The thing is, though, that the same code works for some other ACV files, and this file definitely has data: 64 bytes of it, in fact.
Here is how I am trying to load it:
GPUImageToneCurveFilter *stillImageFilter2 = [[GPUImageToneCurveFilter alloc] initWithACV:@"test"];
UIImage *quickFilteredImage = [stillImageFilter2 imageByFilteringImage:baseImage];
photoImage.image = quickFilteredImage;
If I change test to another ACV file, it works perfectly. Not sure what is wrong.
Thanks
MehtaiPhoneApps

Just add the extension of the tone curve file, test.acv, and you are good to go.
=> updated code
GPUImageToneCurveFilter *stillImageFilter2 = [[GPUImageToneCurveFilter alloc] initWithACV:@"test.acv"];
UIImage *quickFilteredImage = [stillImageFilter2 imageByFilteringImage:baseImage];
photoImage.image = quickFilteredImage;
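If it still fails after adding the extension, it may also be worth confirming that the .acv file is actually being copied into the app bundle. A minimal diagnostic sketch, assuming the file is named test.acv and should be part of the target's Copy Bundle Resources phase:
// Check that the curve file made it into the bundle before handing it to GPUImage
NSString *acvPath = [[NSBundle mainBundle] pathForResource:@"test" ofType:@"acv"];
if (acvPath == nil) {
    NSLog(@"test.acv is not in the app bundle - check Copy Bundle Resources");
} else {
    NSData *acvData = [NSData dataWithContentsOfFile:acvPath];
    NSLog(@"Found curve file with %lu bytes", (unsigned long)acvData.length);
}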

Related

How to integrate .traineddata from trainyourtesseract.com?

I have successfully trained my Tesseract using Anyline's trainyourtesseract and got a .traineddata file in my email. I might be asking a dumb question here, but do you simply drag this into your tessdata folder, cross your fingers, and hope it works? There are no directions for integrating it. I have seen tutorials that integrate two different languages with a code line like this.
G8Tesseract *operation = [[G8Tesseract alloc] init];
operation.language = #"eng+fra";
So I tried to do the following code but it gave me an cube error.
G8Tesseract *operation = [[G8Tesseract alloc] init];
operation.language = #"eng+arial";
The name of the .traineddata file that I got was arial.traineddata.
Running the code above throws this error:
"Cube ERROR (CubeRecoContext::Load): unable to read cube language model params from /var/containers/Bundle/Application/98165164-BA09-40FE-AF82-7CAAE9B77F45/ExWU.app/tessdata/arial.cube.lm
Cube ERROR (CubeRecoContext::Create): unable to init CubeRecoContext object"
Any help would be greatly appreciated!
You could try initializing G8Tesseract with G8OCREngineModeTesseractOnly and see if that works.
__block G8Tesseract *operation = [[G8Tesseract alloc] initWithLanguage:@"eng+arial" engineMode:G8OCREngineModeTesseractOnly];
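For completeness, here is a fuller sketch of running recognition with that setup. The property and method names follow the Tesseract-OCR-iOS G8Tesseract API as I recall it, and sourceImage is a hypothetical UIImage; verify against the version you have installed:
__block G8Tesseract *operation = [[G8Tesseract alloc] initWithLanguage:@"eng+arial" engineMode:G8OCREngineModeTesseractOnly];
operation.image = sourceImage; // hypothetical UIImage containing the text to OCR
[operation recognize];
NSLog(@"Recognized text: %@", operation.recognizedText);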

adding GPUImage framework to iOS project

The hours are turning into days trying to add the GPUImage framework to an iOS project. Now that I've got it working, I'm trying the sample live video filtering code from the Sunset Lake Software page. The app fails to build with the following red error: "Use of undeclared 'thresholdFilter'"
GPUImageVideoCamera *videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
GPUImageFilter *customFilter = [[GPUImageFilter alloc] initWithFragmentShaderFromFile:@"CustomShader"];
GPUImageView *filteredVideoView = [[GPUImageView alloc] initWithFrame:CGRectMake(0.0, 0.0, 768, 1024)];
// problem here
[videoCamera addTarget:thresholdFilter];
[customFilter addTarget:filteredVideoView];
[videoCamera startCameraCapture];
Using Xcode 6.0.1 and testing the app on an iPad 2 with iOS 8.0.2. If required, I can post screenshots of how I embedded the framework.
First, the code written in my initial blog post announcing the framework should not be copied to use with the modern version of the framework. That initial post was written over two years ago, and does not reflect the current state of the API. In fact, I've just deleted all of that code from the original post and directed folks to the instructions on the GitHub page which are kept up to date. Thanks for the reminder.
Second, the problem you're describing above is that you're attempting to use a variable called thresholdFilter without ever defining such a variable. That's not a problem with the framework; the compiler has no idea what you're referring to.
Third, the above code won't work for another reason: you're not holding on to your camera instance. You're defining it locally, instead of assigning it to an instance variable of your encapsulating class. This will lead to ARC deallocating that camera as soon as your above setup method is complete, leading to a black screen or to a crash. You need to create an instance variable or property and assign your camera to that, so that a strong reference is made to it.
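Putting those points together, here is a sketch of what the corrected setup might look like, with the camera and filter stored in properties so ARC keeps them alive. The property names and the surrounding view controller are illustrative, not from the original post:
// In the class extension (illustrative property names):
@property (nonatomic, strong) GPUImageVideoCamera *videoCamera;
@property (nonatomic, strong) GPUImageFilter *customFilter;

// In the view controller's setup method:
self.videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
self.customFilter = [[GPUImageFilter alloc] initWithFragmentShaderFromFile:@"CustomShader"];
GPUImageView *filteredVideoView = [[GPUImageView alloc] initWithFrame:CGRectMake(0.0, 0.0, 768, 1024)];
[self.videoCamera addTarget:self.customFilter];
[self.customFilter addTarget:filteredVideoView];
[self.view addSubview:filteredVideoView];
[self.videoCamera startCameraCapture];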

NSData initWithContentsOfURL reading not all data, but only on device

I am banging my head against an issue in iOS 7 development. I use the following piece of code to load an image from a web server:
NSData* data = [[NSData alloc] initWithContentsOfURL:[NSURL URLWithString:@"http://someServer/someImage.jpg"]];
This works like a charm in the simulator, reading exactly the 134185 bytes that the image has. Creating a UIImage from that data works as intended.
Once I test the exact same code on a device (iPad mini, iOS 7.0.3), though, it reads just 14920 bytes from the same URL. Needless to say, I can't create a UIImage from that data; creation fails and returns nil.
The read does not produce any errors (no console output, and using the signature with the error output parameter also returns nil here). Is there anything I missed in this rather straightforward task? I haven't found anything on the web about this…
Thanks, habitoti
So you don't have any error, and something is downloading. Maybe try to read the response as text and post it here (I guess it is an HTML/text body)?
You can use NSString method:
+ (instancetype)stringWithContentsOfURL:(NSURL *)url encoding:(NSStringEncoding)enc error:(NSError **)error;
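For example, something like this, just to see what the server actually returns on the device (the URL is the one from the question):
NSError *error = nil;
NSString *body = [NSString stringWithContentsOfURL:[NSURL URLWithString:@"http://someServer/someImage.jpg"]
                                          encoding:NSUTF8StringEncoding
                                             error:&error];
// If the device is being served an error page instead of the JPEG, it will show up here
NSLog(@"error: %@\nbody: %@", error, body);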
Can I suggest you use a library like SDWebImage to retrieve your image? It caches images and downloads them asynchronously.
It also has a category for UIImageView, so you can just call [imageView setImageWithURL:]; and it will load the image in when it's ready.

Barcode Generation from within iOS App

I want to take a numerical string and generate a simple barcode that can be read by any scanner.
I can already use the camera and read a barcode but now I would like to generate a barcode.
Does anyone know of an SDK, resources, or code snippets that will allow me to do this?
Thank you
The only free library to do this is Cocoa-Touch-Barcodes, which is a fork of cocoabarcodes. If you are considering commercial libraries, there is one called iPhone Barcode Generator.
Update: check out this Objective-C port of ZXing: https://github.com/TheLevelUp/ZXingObjC
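For reference, generating a barcode image with ZXingObjC looks roughly like this. This is a sketch modeled on the project's README; check the exact class names, constants, and the cgimage accessor against the version you install:
NSError *error = nil;
ZXMultiFormatWriter *writer = [ZXMultiFormatWriter writer];
ZXBitMatrix *result = [writer encode:@"1234567890"
                              format:kBarcodeFormatCode128
                               width:500
                              height:100
                               error:&error];
if (result) {
    // Wrap the rendered bitmap in a UIImage for display in a UIImageView
    UIImage *barcodeImage = [UIImage imageWithCGImage:[[ZXImage imageWithMatrix:result] cgimage]];
} else {
    NSLog(@"Barcode encoding failed: %@", error.localizedDescription);
}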
Include #import "NKDBarcodeFramework.h" in your header file and put the lines below in your init function.
barcode = [NKDExtendedCode39Barcode alloc];
barcode = [barcode initWithContent:@"1234567890123" printsCaption:0];
[barcode calculateWidth];
NSLog(@"%@", [barcode description]);
theImage = [UIImage imageFromBarcode:barcode];
subview = [[UIImageView alloc] initWithFrame:TTScreenBounds()];
[subview setImage:theImage];
[self addSubview:subview];
self.frame = self.bounds;
have fun :-)
There are many barcode types:
1D
2D
3D
Each barcode type has many subtypes, and each has its own purpose.
Here I explain how to generate one of the 1D barcode types, Code 39, using a custom font.
Steps:
1) Download the custom font from here
2) Add the file FRE3OF9X.ttf from the downloaded zip to your project
3) Add the key "Fonts provided by application" to Info.plist and give FRE3OF9X.ttf as the value of item 0
4) Try the code snippet below
UIFont *fntCode39 = [UIFont fontWithName:@"Free3of9Extended" size:30.0];
UILabel *lblBarCodeTest = [[UILabel alloc] initWithFrame:CGRectMake(0, 100, 768, 30)];
[lblBarCodeTest setBackgroundColor:[UIColor lightGrayColor]];
[lblBarCodeTest setTextAlignment:NSTextAlignmentCenter];
[lblBarCodeTest setFont:fntCode39];
[lblBarCodeTest setText:@"*BarCode3Of9_AKA_Code39-ItsA1DBarcode*"];
[self.view addSubview:lblBarCodeTest];
You can use Core Image to generate barcode images. Core Image provides four filters that generate different barcodes: CICode128BarcodeGenerator, CIQRCodeGenerator, CIPDF417BarcodeGenerator, and CIAztecCodeGenerator.
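As a sketch of that approach, using the Code 128 generator (the message string and scale factor here are just example values):
// CICode128BarcodeGenerator expects its message as NSData
NSData *messageData = [@"1234567890" dataUsingEncoding:NSASCIIStringEncoding];
CIFilter *filter = [CIFilter filterWithName:@"CICode128BarcodeGenerator"];
[filter setValue:messageData forKey:@"inputMessage"];
// The generated image is tiny, so scale it up before display
CIImage *scaled = [filter.outputImage imageByApplyingTransform:CGAffineTransformMakeScale(4.0, 4.0)];
UIImage *barcodeImage = [UIImage imageWithCIImage:scaled];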
I've created a simple class for generating a Code 39 barcode. Only one .h and one .m file need to be added to your project, and one line of code generates a UIImage with the Code 39 encoded data for you, like this:
UIImage *code39Image = [Code39 code39ImageFromString:@"HELLO CODE39" Width:barcode_width Height:barcode_height];
Here's the link to the project on GitHub:
https://github.com/bclin087/Simple-Code39-generator-for-iOS.git

OpenCV and iPhone

I am writing an application to create a movie file from a bunch of images on an iPhone. I am using OpenCV. I downloaded the OpenCV static libraries for ARM (the iPhone's native instruction architecture), and the libraries were generated just fine. There were no problems linking to those libraries.
As a first step, I was trying to create an .avi file from one image, to see if it works. But cvCreateVideoWriter always returns a NULL value. I did some searching, and I believe it's due to the codec not being present. I am trying this on the iPhone simulator. This is what I do:
- (void)viewDidLoad {
    [super viewDidLoad];
    UIImage *anImage = [UIImage imageNamed:@"1.jpg"];
    IplImage *img_color = [self CreateIplImageFromUIImage:anImage];
    // The image gets created just fine
    CvVideoWriter *writer =
        cvCreateVideoWriter("out.avi", CV_FOURCC('P','I','M','1'),
                            25, cvSize(320, 480), 1);
    // writer is always NULL
    int result = cvWriteFrame(writer, img_color);
    NSLog(@"\n%d", result);
    // hence this is also 0 all the time
    cvReleaseVideoWriter(&writer);
}
I am not sure about the second parameter. What sort of codec is it, and what exactly does it do?
I am a n00b at this. Any suggestions?
On *nix flavors, OpenCV uses ffmpeg under the covers to encode video files, so you need to make sure your static libraries are built with ffmpeg support. The second parameter, CV_FOURCC('P','I','M','1'), is the FOURCC code describing the video format/codec you are requesting, in this case the MPEG-1 codec. Check out fourcc.org for a complete listing (not all of which work in ffmpeg).
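If rebuilding the libraries with ffmpeg support is not an option, it may be worth trying a different FOURCC to see whether any built-in encoder is available. A purely diagnostic sketch, with the same output parameters as the question:
// Try Motion-JPEG instead of MPEG-1; if this also returns NULL,
// the build almost certainly has no usable video encoder compiled in.
CvVideoWriter *writer = cvCreateVideoWriter("out.avi",
                                            CV_FOURCC('M','J','P','G'),
                                            25, cvSize(320, 480), 1);
if (writer == NULL) {
    NSLog(@"cvCreateVideoWriter failed - no encoder for this FOURCC either");
}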
