EfficientNet: Failed to convert elements of SparseTensor

I'm trying to train my model using EfficientNetB0, but I get an error saying it failed to convert elements of a SparseTensor. Honestly, I don't know why; I'm just following a tutorial. (Screenshot of the error: https://i.stack.imgur.com/5a5az.png)
How can I see what shape is required and what shape I actually have? As far as I know the input shapes already match, but it still doesn't work.
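For the shape question: with Keras you can print what the model expects and what your data pipeline actually yields, and densify any SparseTensor elements, which Keras layers generally cannot consume directly. A minimal sketch, assuming a tf.data.Dataset named ds and an EfficientNetB0-based model named model (both names are assumptions):

    import tensorflow as tf

    print("Model expects:", model.input_shape)   # e.g. (None, 224, 224, 3)
    print("Dataset yields:", ds.element_spec)    # look for SparseTensorSpec here

    def densify(x, y):
        # Convert any SparseTensor inputs/labels to dense tensors
        if isinstance(x, tf.SparseTensor):
            x = tf.sparse.to_dense(x)
        if isinstance(y, tf.SparseTensor):
            y = tf.sparse.to_dense(y)
        return x, y

    ds = ds.map(densify)

If element_spec shows a SparseTensorSpec where the model expects a dense image or label tensor, that mismatch is the usual source of this error.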

Related

Check in which image in a dataset I get the warning "libpng warning: iCCP: known incorrect sRGB profile" while using OpenCV

I have a dataset of 2500 images. To access each of them and apply normalization, I loop over the images:

    for path_to_image in dataset:
        image = cv2.imread(path_to_image)
        # rest of the code follows

While running the code I get this warning: libpng warning: iCCP: known incorrect sRGB profile
I tried some tricks to find the approximate iteration at which the image is not being read properly, but they did not work.
Is there any way I can obtain the value of path_to_image at the point where this warning occurs?
What I was thinking is that there must be a variable that stores some information about the warning.
I would be grateful for any help.
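The warning comes from libpng at the C level, so Python never sees it as an exception; one workaround is to scan each PNG yourself for the iCCP chunk that triggers it. A minimal sketch, assuming dataset is a list of file paths as in the loop above:

    import struct

    def has_iccp_chunk(path):
        # Walk the PNG chunk list and report whether an iCCP chunk is present
        with open(path, "rb") as f:
            if f.read(8) != b"\x89PNG\r\n\x1a\n":
                return False  # not a PNG file
            while True:
                header = f.read(8)
                if len(header) < 8:
                    return False
                length, ctype = struct.unpack(">I4s", header)
                if ctype == b"iCCP":
                    return True
                f.seek(length + 4, 1)  # skip chunk data plus 4-byte CRC

    for path_to_image in dataset:
        if has_iccp_chunk(path_to_image):
            print("iCCP chunk (likely warning source):", path_to_image)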

Any ideas on why my CoreML model created with turicreate isn't working?

Pretty much brand new to ML here. I'm trying to create a hand-detection CoreML model using turicreate.
The dataset I'm using is from https://github.com/aurooj/Hand-Segmentation-in-the-Wild , which provides images of hands from an egocentric perspective, along with masks for the images. I'm following the steps in turicreate's "Data Preparation" (https://github.com/apple/turicreate/blob/master/userguide/object_detection/data-preparation.md) step-by-step to create the SFrame. Checking the contents of the variables throughout this process, there doesn't appear to be anything wrong.
Following data preparation, I follow the steps in the "Introductory Example" section of https://github.com/apple/turicreate/tree/master/userguide/object_detection
I get the first hint of an error while turicreate is performing training iterations: there doesn't appear to be any loss reported at all, which doesn't seem right.
After the model is created, I try to test it with a test_data portion of the SFrame. The results of these predictions are just empty arrays though, which is obviously not right.
After exporting the model as a CoreML .mlmodel and trying it out in an app, it is unable to recognize anything (not surprisingly).
Being completely new to model creation, I can't figure out what might be wrong. The dataset seems quite accurate to me. The only changes I made to the dataset were that some of the masks didn't have explicit file extensions (they are PNGs), so I added the .png extension. I also renamed the images to follow turicreate's tutorial formats (i.e. vid4frame025.image.png and vid4frame025.mask.0.png). Again, the SFrame creation process using this data seems correct at each step. I was able to follow the process successfully with turicreate's tutorial dataset (bikes and cars). Any ideas on what might be going wrong?
I found the problem, and it basically stemmed from my unfamiliarity with Python.
In one part of the Data Preparation section, after creating bounding boxes out of the mask images, each annotation is assigned a 'label' indicating the type of object the annotation is meant to represent. My data had a different name format than the tutorial's data, so rather than each annotation having 'label': 'bike', my annotations had 'label': 'vid4frame25', 'label': 'vid4frame26', etc.
Correcting this so that each annotation has 'label': 'hand' seems to have fixed it (or at least it's producing a legitimate-seeming model so far).
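For anyone hitting the same thing, a minimal sketch of the fix, assuming the training data is an SFrame named sf with an 'annotations' column holding lists of annotation dicts (both names are hypothetical):

    def relabel(annotations):
        # Force every bounding-box annotation to the single class being trained
        for ann in annotations:
            ann['label'] = 'hand'
        return annotations

    sf['annotations'] = sf['annotations'].apply(relabel)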

How to implement new .mlmodel in this project

I am trying to create a handwritten digit recogniser that uses a Core ML model.
I am taking the code from another similar project:
https://github.com/r4ghu/iOS-CoreML-MNIST
But I need to incorporate my ML model into this project.
This is my model (input image is 299x299):
https://github.com/LOLIPOP-INTELLIGENCE/createml_handwritten
My question is: what changes need to be made in that project so that it uses my Core ML model?
I tried changing the shapes to 299x299, but that gives me an error.
In viewDidLoad, you should change the number 28 to 299 in the call to CVPixelBufferCreate(). In the original app, the mlmodel expects a 28x28 image but your model uses 299x299 images.
However, there is something else you need to change as well: replace kCVPixelFormatType_OneComponent8 with kCVPixelFormatType_32BGRA or kCVPixelFormatType_32RGBA. The original model uses grayscale images but yours expects color images.
P.S. Next time include the actual error message in your question. That's an important piece of information for people who are trying to answer. :-)
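If you want to confirm what input your model actually expects before editing the app, you can inspect the .mlmodel from Python with coremltools (a minimal sketch; the filename is hypothetical):

    import coremltools as ct

    model = ct.models.MLModel("createml_handwritten.mlmodel")
    # The input description reports the expected image size and color space
    print(model.get_spec().description.input)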

Tesseract does not recognize the complete image but correctly recognizes part of it?

I have to parse some lab reports and I am using Tesseract to extract data from them. I have run into an issue: Tesseract does not correctly recognize the text if I pass the entire page's image, but if I pass a small subsection of the page (from Test Report covering the entire table till *****) it is able to read all the text correctly.
In the former case (when I pass the entire image) it produces some random output of English words that do not make sense.
The command I ran: tesseract -l eng report.png out
Part of the output is as follows:
Refierence No : assurcAN, 98941-EU
5:er Nu (SKU) , 95942, 95943
Labelled age gwup “aw
Quamny 20 pweces
Fackagmg pmwosd Yes
Vendor
Manmamurer
But when I pass the subsection, I get accurate results.
What might be the issue here? How do I fix it?
See the sample report image:
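One common culprit: Tesseract's default full-page layout analysis (--psm 3) often mis-segments tables. Trying other page segmentation modes can help; a minimal sketch using pytesseract (an assumption: the question used the CLI directly, where the same --psm flag applies):

    import pytesseract
    from PIL import Image

    img = Image.open("report.png")
    # Modes 6 (uniform text block) and 4 (column of variable-size text)
    # often behave better on tables than the default mode 3
    for psm in (3, 4, 6, 11):
        text = pytesseract.image_to_string(img, lang="eng", config=f"--psm {psm}")
        print(f"--- psm {psm} ---")
        print(text[:300])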

How to find matching frame for a prestored reference Image in a Video using OpenCV? [duplicate]

Possible Duplicate:
OpenCV match template not scoring well
I have a stored video file and need to check each frame for a match against an already prestored image file. I used cvQueryFrame, and from that created an image for frame-to-image conversion, then checked this image X against the prestored image Y using template matching, but I could not get the desired result. Should some combination of SURF, SIFT, or line detection algorithms be used?
No need to create a new image:

    IplImage *frame = cvQueryFrame(capture);  /* capture is your CvCapture* */

will give you the frame in IplImage form. Template matching is just correlation, so it might not always give you a correct match. If you are using OpenCV 2.4.2, you can use FREAK, ORB, SIFT, or SURF for matching your pre-stored image with your video frames; here is the link
If your "prestored" image is part of the video, you can simply compute the difference (e.g. cv::norm) between the frames and your image.
If you are looking to find an image which is "similar to" a given query image, then it is a very different topic, and there are a lot of academic articles on this subject.
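For a concrete starting point, here is a minimal sketch of the feature-matching approach suggested above, using ORB with a modern OpenCV's Python bindings (the answer mentions 2.4.2, where the API differed slightly; file names and thresholds are hypothetical):

    import cv2

    # Detect ORB features once on the pre-stored reference image
    ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create()
    kp_ref, des_ref = orb.detectAndCompute(ref, None)

    # Hamming distance matches ORB's binary descriptors
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    cap = cv2.VideoCapture("video.mp4")
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        kp, des = orb.detectAndCompute(gray, None)
        if des is None:
            continue
        matches = matcher.match(des_ref, des)
        good = [m for m in matches if m.distance < 40]   # heuristic threshold
        if len(good) > 20:                               # heuristic match count
            print("Candidate frame:", int(cap.get(cv2.CAP_PROP_POS_FRAMES)))
    cap.release()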
