Tesseract has crashed - iOS

I am using Tesseract on my iOS device, and it was working properly until recently, when it started to crash on me. I have been testing with the same image over and over, and prior to now it worked about 75 times consecutively. The only thing I can think of is that I deleted the app from my iOS device and then ran it again through Xcode.
I am far from an expert on Tesseract and could really use some advice on what to do next; it would truly be a disappointment for all the hours I put in to go to waste because I cannot read the image anymore. Thank you.
This is the crash error; it appears to happen at the Tesseract call in this method:
- (BOOL)recognize
{
    int returnCode = _tesseract->Recognize(NULL); // here is where the arrow points on the crash
    return (returnCode == 0) ? YES : NO;
}
This is an old question from Alex G, and I don't see any answer.
Has anyone found the root cause and a solution? Please advise. Many thanks.

I hope you are using AVCaptureSession to take photos continuously and passing them to Tesseract after some image processing.
Before passing a UIImage to Tesseract for recognition, you should check it like this:
CGSize size = [image size]; // your image
int width = size.width;
int height = size.height;
if (width < 100 || height < 50) {
    // The UIImage must have some minimum size; treat it as invalid.
    return;
}
// This next condition is not mandatory.
uint32_t *_pixels = (uint32_t *)malloc(width * height * sizeof(uint32_t));
if (!_pixels) {
    // Allocation failed; treat the image as invalid.
    return;
}
// Remember to free(_pixels) when you are done with the buffer.
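Putting the two together, a guarded version of the wrapper from the question might look roughly like the following. This is a minimal sketch, not the asker's actual code: it is Objective-C++ as in the question, -setTesseractImage: is a hypothetical helper standing in for however the image is handed to the tesseract::TessBaseAPI instance, and the size thresholds are as arbitrary as in the check above.
- (BOOL)recognizeImage:(UIImage *)image
{
    // Reject nil or implausibly small images before they reach Tesseract,
    // instead of letting Recognize(NULL) crash on invalid input.
    if (image == nil || image.size.width < 100 || image.size.height < 50) {
        return NO;
    }
    [self setTesseractImage:image]; // hypothetical helper that calls SetImage()
    int returnCode = _tesseract->Recognize(NULL);
    return (returnCode == 0) ? YES : NO;
}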

Related

iOS 11.3 ARKit Autofocus and higher resolution

As mentioned here, Apple is allowing us to use higher resolution and autofocus in apps based on ARKit. Has anybody already tried implementing those features in their own apps?
Where does Apple usually share more technical details about such updates?
Regards!
Until now you haven't been able to set the ARCamera image resolution, which is why you may not find anything relating to adjusting it: Changing ARCamera Resolution.
If you want to check the ARCamera image resolution, you can access it via the currentFrame:
let currentFrame = sceneView.session.currentFrame
print(currentFrame?.camera.imageResolution)
To date it has been set to 1280.0 x 720.0.
If you want more information about focal length, which I believe autofocus may now be able to adjust automatically, you can just check the camera property of currentFrame:
print(currentFrame?.camera)
OK, autofocus can be added with the isAutoFocusEnabled property:
var isAutoFocusEnabled: Bool { get set }

let configuration = ARWorldTrackingConfiguration()
configuration.isAutoFocusEnabled = true // or false
Any clues about higher resolution?
I am able to select the desired resolution for the AR camera using the following code:
ARWorldTrackingConfiguration* configuration = [ARWorldTrackingConfiguration new];
NSArray<ARVideoFormat*>* supportedVideoFormats = [ARWorldTrackingConfiguration supportedVideoFormats];
int bestFormatIndex = 0;
for (int i = 0; i < [supportedVideoFormats count]; i++) {
    float width = supportedVideoFormats[i].imageResolution.width;
    float height = supportedVideoFormats[i].imageResolution.height;
    NSLog(@"AR Video Format %f x %f", width, height);
    if (width * 9 == height * 16) { // <-- Use your own condition for selecting a video format
        bestFormatIndex = i;
        break;
    }
}
[configuration setVideoFormat:supportedVideoFormats[bestFormatIndex]];

// Run the view's session
[_mSceneView.session runWithConfiguration:configuration];
For my requirement I wanted the biggest 16:9 ratio. On iPhone 8 Plus, the following image resolutions are listed:
1920 x 1440
1920 x 1080
1280 x 720
Notice that they are sorted in the supportedVideoFormats array with the biggest resolution at index 0, and the format at index 0 is the default selected video format.
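So if you simply want the biggest available resolution, picking the first element should be enough. A short sketch that relies on the ordering just described, using the same configuration object as above:
// Assuming the formats really are sorted largest-first, as observed above,
// the first supported format is the highest resolution.
configuration.videoFormat = [[ARWorldTrackingConfiguration supportedVideoFormats] firstObject];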

Scaling Image Causes Crash In AS3 Flex AIR Mobile App

Problem:
Zooming in on an image by scaling and moving it using a matrix causes the app to run out of memory and crash.
Additional Libraries used:
Gestouch - https://github.com/fljot/Gestouch
Description:
In my Flex Mobile app I have an Image inside a Group, with pan/zoom enabled using the Gestouch library. The zoom works to an extent but causes the app to die (not freeze, just exit) with no error message after a certain zoom level.
This would be manageable, except I can't figure out how to implement a threshold to stop the zoom at, as it crashes at a different zoom level almost every time. I also use dynamic images, so the source of the image could be any size or resolution.
They are usually JPEGs ranging from about 800x600 to 9000x6000 and are downloaded from a server, so they cannot be packaged with the app.
As of the AS3 docs there is no longer a limit on the size of a BitmapData object, so that shouldn't be the issue:
“Starting with AIR 3 and Flash Player 11, the size limits for a BitmapData object have been removed. The maximum size of a bitmap is now dependent on the operating system.”
The group is used as a marker layer for overlaying pins on.
The crash mainly happens on iPad Mini and older Android devices.
Things I have already tried:
1. Using Adobe Scout to pinpoint when the memory leak occurs.
2. Debugging to find the exact height and width of the marker layer and image at the time of the crash.
3. Setting a max zoom variable based on the size of the image.
4. Cropping the image on zoom to show only the visible area (crashes in the copyPixels and BitmapData.draw() functions).
5. Using ImageMagick to make lower-quality images (small images still crash).
6. Using ImageMagick to make a very low-res image plus a grid of smaller images, displayed in the mobile app using a List with a Tile layout.
7. Using weak references when adding event listeners.
Any suggestions would be appreciated.
Thanks
private function layoutImageResized(e:Event):void
{
    markerLayer.scaleX = markerLayer.scaleY = 1;
    markerLayer.x = markerLayer.y = 0;
    var scale:Number = Math.min(width / image.sourceWidth, height / image.sourceHeight);
    image.scaleX = image.scaleY = scale;
    _imageIsWide = (image.sourceWidth / image.sourceHeight) > (width / height);
    // centre image
    if (_imageIsWide)
    {
        markerLayer.y = (height - image.sourceHeight * image.scaleY) / 2;
    }
    else
    {
        markerLayer.x = (width - image.sourceWidth * image.scaleX) / 2;
    }
    // set max scale
    _maxScale = scale * _maxZoom;
}

private function onGesture(event:org.gestouch.events.GestureEvent):void
{
    trace("Gesture start");
    // If the user starts moving around while the add-pin option is up,
    // the state will be changed and the menu will disappear.
    if (currentState == "addPin")
    {
        return;
    }
    const gesture:TransformGesture = event.target as TransformGesture;
    // trace("gesture state is ", gesture.state);
    if (gesture.state == GestureState.BEGAN)
    {
        currentState = "zooming";
        imgOldX = image.x;
        imgOldY = image.y;
        oldImgWidth = markerLayer.width;
        oldImgHeight = markerLayer.height;
        if (!_hidePins)
        {
            showHidePins(false);
        }
    }
    var matrix:Matrix = markerLayer.transform.matrix;
    // Pan
    matrix.translate(gesture.offsetX, gesture.offsetY);
    markerLayer.transform.matrix = matrix;
    if ((gesture.scale != 1 || gesture.rotation != 0)
        && ((markerLayer.scaleX < _maxScale && markerLayer.scaleY < _maxScale) || gesture.scale < 1)
        && gesture.scale < 1.4)
    {
        storedScale = gesture.scale;
        // Zoom
        var transformPoint:Point = matrix.transformPoint(markerLayer.globalToLocal(gesture.location));
        matrix.translate(-transformPoint.x, -transformPoint.y);
        matrix.scale(gesture.scale, gesture.scale);
        /** THIS IS WHERE THE CRASH HAPPENS **/
        matrix.translate(transformPoint.x, transformPoint.y);
        markerLayer.transform.matrix = matrix;
    }
}
I would say it's not a good idea to work with such a large image (9000x6000) on mobile devices.
I suppose you are trying to implement some sort of map navigation, so you need to zoom in on some areas hugely.
My solution would be to split that 9000x6000 image into 2048x2048 pieces, then compress them using the png2atf utility with mipmaps enabled.
Then you can use Starling to easily load these ATF images, add them to Stage3D, and manage them.
For a 9000x6000 image you'll get about 15 2048x2048 pieces. Having them all on the stage at one time might sound heavy, but the mipmaps ensure that only tiny thumbnails of each piece stay in memory until they are zoomed. So you'll never run out of memory, as long as you remove invisible pieces from the stage from time to time while zooming in and return them on zoom out.

How can I use OpenGL ES to crop an image?

I face some problems when dealing with image cropping. I am aware of two possible ways: UIGraphicsBeginImageContextWithOptions combined with the drawAtPoint:blendMode: method, and CGImageCreateWithImageInRect. They both work, but each has a serious disadvantage: the first way takes a lot of time (in my case approx. 7 seconds) and memory (I receive a memory warning) when cropping an image taken with the iPhone camera; the second ends up with a rotated image, so you need a bunch of code to defeat this behavior, which I don't want.
What I want to know is how, for instance, Apple's built-in edit function in the Photos app works, or Aviary, or any other photo editor. Consider Apple's editor (iOS 8): you can rotate the image and change the cropping rectangle, they even blur the area outside the cropping rect, and yet applying the crop takes at most 8 MB of memory and happens immediately. How do they do this?
The only thought I have is that they use the potential of the GPU (Aviary, for instance). So, combining all these questions into one: how can I use OpenGL to make cropping a less painful operation? I've never worked with it, so any tutorials, links and sources are welcome. Thank you in advance.
As already mentioned, this should most likely not be done with OpenGL, but even so...
The problem most people have is getting the rectangle in which the image should be displayed, and the solution looks something like this:
- (CGRect)fillSizeForSource:(CGRect)source target:(CGRect)target
{
    if (source.size.width / source.size.height > target.size.width / target.size.height)
    {
        // keep target height and make the width larger
        CGSize newSize = CGSizeMake(target.size.height * (source.size.width / source.size.height), target.size.height);
        return CGRectMake((target.size.width - newSize.width) * .5f, .0f, newSize.width, newSize.height);
    }
    else
    {
        // keep target width and make the height larger
        CGSize newSize = CGSizeMake(target.size.width, target.size.width * (source.size.height / source.size.width));
        return CGRectMake(.0f, (target.size.height - newSize.height) * .5f, newSize.width, newSize.height);
    }
}

- (CGRect)fitSizeForSource:(CGRect)source target:(CGRect)target
{
    if (source.size.width / source.size.height < target.size.width / target.size.height)
    {
        // keep target height and make the width smaller
        CGSize newSize = CGSizeMake(target.size.height * (source.size.width / source.size.height), target.size.height);
        return CGRectMake((target.size.width - newSize.width) * .5f, .0f, newSize.width, newSize.height);
    }
    else
    {
        // keep target width and make the height smaller
        CGSize newSize = CGSizeMake(target.size.width, target.size.width * (source.size.height / source.size.width));
        return CGRectMake(.0f, (target.size.height - newSize.height) * .5f, newSize.width, newSize.height);
    }
}
I did not test this.
Or, since you are on iOS, simply create an image view with the desired size, add the image to it, and then take a screenshot of the image view.
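That screenshot approach might look roughly like the following. This is a minimal, untested sketch built on standard UIKit APIs; CroppedImage is a name invented here, and since it still rasterizes through Core Graphics, very large camera images carry the same memory caveats discussed above.
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

// Render a full-size, off-screen image view into a bitmap context that
// is only as big as the desired crop rect (in the image's coordinates).
UIImage *CroppedImage(UIImage *source, CGRect cropRect)
{
    UIImageView *imageView = [[UIImageView alloc]
        initWithFrame:CGRectMake(0, 0, source.size.width, source.size.height)];
    imageView.image = source;

    UIGraphicsBeginImageContextWithOptions(cropRect.size, NO, source.scale);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Shift the context so that only cropRect lands inside the bitmap.
    CGContextTranslateCTM(context, -cropRect.origin.x, -cropRect.origin.y);
    [imageView.layer renderInContext:context];
    UIImage *cropped = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return cropped;
}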

Loading animations with RGBA8888 and RGBA4444 shows no difference in memory usage (platform: cocos2d & iOS)

Platform: cocos2d, iOS.
Step 1: Loading animations from FileName.pvr.ccz (TexturePacker) with ImageFormat="RGBA8888" shows memory usage of 10.0 MB in Xcode Instruments.
Step 2: Loading animations from FileName.pvr.ccz (TexturePacker) with ImageFormat="RGBA4444" also shows memory usage of 10.0 MB in Xcode Instruments.
Question: why is there no difference in memory usage when using the lower ImageFormat="RGBA4444" instead of the higher ImageFormat="RGBA8888"?
TexturePacker file size = 2047 x 1348
The default texture format is RGBA8888, so if you have an RGBA4444 texture you need to change the format before loading the texture (and perhaps change it back afterwards).
The method to change the texture format for newly created textures is a class method of CCTexture2D:
+ (void) setDefaultAlphaPixelFormat:(CCTexture2DPixelFormat)format;
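For example, a hedged sketch using the cocos2d-iphone enum names; FileName.plist is assumed here to be the TexturePacker-generated sheet that references FileName.pvr.ccz:
// Switch the default format to RGBA4444 before loading the 4444 atlas,
// then restore the default so later textures are unaffected.
[CCTexture2D setDefaultAlphaPixelFormat:kCCTexture2DPixelFormat_RGBA4444];
[[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"FileName.plist"];
[CCTexture2D setDefaultAlphaPixelFormat:kCCTexture2DPixelFormat_Default];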
I found what caused my memory usage to be the same in both formats: http://www.cocos2d-iphone.org/forum/topic/31092.
In CCTexturePVR.m:
// Not word aligned?
if (mod != 0) {
    NSUInteger neededBytes = (4 - mod) / (bpp / 8);
    printf("\n");
    NSLog(@"cocos2d: WARNING. Current texture size=(%tu,%tu). Convert it to size=(%tu,%tu) in order to save memory", _width, _height, _width + neededBytes, _height);
    NSLog(@"cocos2d: WARNING: File: %@", [path lastPathComponent]);
    NSLog(@"cocos2d: WARNING: For further info visit: http://www.cocos2d-iphone.org/forum/topic/31092");
    printf("\n");
}
It's a cocos2d (or iOS) bug which can be handled by adjusting the .pvr.ccz size:
each dimension should be divisible by 4 (it does not have to be a power of two). That resolves the bug and gives the expected memory difference between the two formats; for example, at 2048 x 1348 an RGBA8888 texture takes 2048 x 1348 x 4 bytes ≈ 11 MB, while RGBA4444 takes half that, ≈ 5.5 MB.

Mouse handler in OpenCV for large images: wrong x, y coordinates?

I am using images that are 2048 x 500, and when I use cvShowImage, I only see half the image. This is not a big deal, because the interesting part is in the top half of the image. However, when I use the mouse handler to get the x, y coordinates of my clicks, I noticed that the y coordinate (the dimension that doesn't fit on the screen) is wrong.
It seems OpenCV thinks this is the whole image and recalibrates the coordinate system, although we are only effectively showing half the image.
I would need to know how to do two things:
- display a resized image that fits on the screen;
- get the proper coordinates.
Did anybody encounter similar problems?
Thanks!
Update: it seems the y coordinate is half of what it is supposed to be.
Code:
EXPORT void click_rect(uchar *the_img, int size_x, int size_y, int *points)
{
    CvSize size;
    size.height = size_y;
    size.width = size_x;

    // Wrap the caller's buffer in an IplImage header (no data copy).
    IplImage *img = cvCreateImageHeader(size, IPL_DEPTH_8U, 1);
    img->imageData = (char *)the_img;
    img->imageDataOrigin = img->imageData;

    // img1, x_1, x_2, y_1 and y_2 are globals; the coordinates are
    // filled in by mouseHandler_rect.
    img1 = cvCreateImage(cvSize(size.width, size.height), IPL_DEPTH_8U, 1);
    cvCopy(img, img1, NULL); // copy the input so there is something to show

    cvNamedWindow("mainWin", CV_WINDOW_AUTOSIZE);
    cvMoveWindow("mainWin", 100, 100);
    cvSetMouseCallback("mainWin", mouseHandler_rect, NULL);
    cvShowImage("mainWin", img1);

    // wait for a key
    cvWaitKey(0);

    points[0] = x_1;
    points[1] = x_2;
    points[2] = y_1;
    points[3] = y_2;

    // release the window and images
    cvDestroyWindow("mainWin");
    cvReleaseImage(&img1);
    cvReleaseImageHeader(&img); // header only; the pixel data belongs to the caller
}
You should create the window with the CV_WINDOW_KEEPRATIO flag instead of the CV_WINDOW_AUTOSIZE flag. This temporarily fixes the problem with your y values being wrong.
I use OpenCV 2.1 and the Visual Studio C++ compiler. I fixed this problem with another flag, CV_WINDOW_NORMAL, which works properly and returns correct coordinates; this flag also enables you to resize the image window.
cvNamedWindow("Box Example", CV_WINDOW_NORMAL);
I am having the same problem with OpenCV 2.1, using it on Windows with the MinGW compiler. It took me forever to find out what was wrong. As you describe it, cvSetMouseCallback gets y coordinates that are too large. This is apparently due to the image, and the cvNamedWindow it is shown in, being bigger than my screen resolution; thus I cannot see the bottom of the image.
As a solution, I resize the images to a fixed size such that they fit on the screen (in this case a resolution of 800x600, but it could be any other value):
// g_input_image, g_output_image and g_resized_image are global IplImage* pointers.
int img_w = cvGetSize(g_input_image).width;
int img_h = cvGetSize(g_input_image).height;

// If the height/width ratio is greater than 6/8, resize the height to 600...
if (img_h > (img_w * 6) / 8) {
    g_resized_image = cvCreateImage(cvSize((img_w * 600) / img_h, 600), 8, 3);
}
// ...else adjust the width to 800.
else {
    g_resized_image = cvCreateImage(cvSize(800, (img_h * 800) / img_w), 8, 3);
}
cvResize(g_output_image, g_resized_image);
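The mouse callback then reports coordinates in the resized image, so to map a click back to the original you undo the scaling. A small sketch under the same assumptions; clicked_x and clicked_y are hypothetical names for the values the callback hands you:
/* Map a click on g_resized_image back to g_input_image coordinates. */
int orig_x = (clicked_x * img_w) / cvGetSize(g_resized_image).width;
int orig_y = (clicked_y * img_h) / cvGetSize(g_resized_image).height;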
Not a perfect solution, but works for me...
Cheers,
Linus
How are you building the window? You are not passing CV_WINDOW_AUTOSIZE to cvNamedWindow(), are you?
Share some source, @Denis.