EDIT
The image below shows the pre-processing sequence applied to the original image.
1. Original image -> 2. Blur n times to make the QR code position stand out -> 3. Crop the original image at the position extracted in step 2 using blob detection -> 4. Sharpen and threshold -> 5. Check the three finder squares of the QR code -> 6. Apply additional transformations such as rotation -> (final image: the cropped image at the resized resolution).
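A rough sketch of how steps 2-4 might look in OpenCV/Python (rather than AForge; the filename, the blur size, and the assumption that the QR code is the largest dark blob in the photo are all placeholders):
import cv2

# Step 2: heavy blur so the QR modules merge into one dark blob.
orig = cv2.imread('original.png', cv2.IMREAD_GRAYSCALE)
blur = cv2.GaussianBlur(orig, (51, 51), 0)

# Step 3: threshold the blurred image and crop around the largest dark blob.
_, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4 signature
x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
crop = orig[y:y + h, x:x + w]

# Step 4: threshold the crop before looking for the finder squares.
_, crop_bw = cv2.threshold(crop, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite('cropped.png', crop_bw)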
Old Question
I am trying to reconstruct a QR code from the original image. As you can see, the photo contains a damaged QR code, so I use the AForge library to detect the 3 finder squares in the image using blob detection. What I don't understand is the logic to regenerate the QR code from this information. Is it technically possible to reconstruct the QR code with the given information?
This is an interesting problem. To answer your question: yes, it is certainly technically possible. The QR code in your question encodes "5176941.12".
Here's the preprocessed image, so that it's easier to set the pixels manually.
After this step, I use Excel to set each pixel one by one. After that, simply point your phone at the computer screen. This is what it looks like. If you want the Excel sheet, you can get it here.
Now that the question of possibility is out of the way, how do you automate it? Without seeing additional samples, it is difficult to say for sure. However, based on this sample alone, the simplest approach is to align a 21x21 grid over your cropped QR image, fill in each cell using a threshold, and then pass the result to your QR decoder. QR codes have a certain level of error-correction redundancy, so even if some modules are wrong or missing, you will most likely still be able to recover the original data.
Edit
Here's some Python code which may serve as a guide to how you might automate this. A few things to note:
I bypass the step of detecting the 3 finder boxes and instead crop the image very tightly by hand. If there is any rotation during capture, you need to correct it first.
The threshold of 0.6 needs adjusting for different images. Right now it 'luckily' works even though there are multiple errors. If the errors are too extensive, you may never get a valid QR code.
Code:
import cv2
import numpy as np

def fill3box(qr):
    # clear the three finder-pattern areas...
    qr[0:7, 0:7] = 1
    qr[14:21, 14:21] = 1
    qr[14:21, 0:7] = 1
    # ...draw a clean finder pattern in the top-left corner...
    qr[0, 0:6] = 0
    qr[0:6, 0] = 0
    qr[0:6, 6] = 0
    qr[6, 0:7] = 0
    qr[2:5, 2:5] = 0
    # ...and copy it to the other two corners
    qr[14:21, 14:21] = qr[0:7, 0:7]
    qr[14:21, 0:7] = qr[0:7, 0:7]
    return qr

im = cv2.imread('to_process.png')
im = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
im = cv2.resize(im, (210, 210))  # 21 modules x 10 px per module
im = 1 - ((im - im.min()) / (im.max() - im.min()))  # normalize and adjust contrast
avg = np.average(im)
qr = np.ones((21, 21))
h, w = im.shape[:2]  # shape is (rows, cols); the image is square, so the order does not matter here
im_orig = im.copy()
im[im < avg] = 0  # binarize
im[im > avg] = 1

# sample each 10x10 cell and mark the module black if it is mostly dark
for y in range(21):
    for x in range(21):
        x1, y1 = (round(x * w / 21), round(y * h / 21))
        x2, y2 = (round(x1 + 10), round(y1 + 10))
        im_box = im[y1:y2, x1:x2]
        if np.average(im_box) < 0.6 and qr[y, x] != 0:  # 0.6 needs tweaking
            qr[y, x] = 0

qr = fill3box(qr)  # clean up the 3 finder-box areas as they need to be fixed

# debug visualization: draw the 21x21 grid over the original
for x in range(21):
    p1 = (round(x * w / 21), 0)
    p2 = (round(x * w / 21), h)
    cv2.line(im_orig, p1, p2, 1.0, 1)  # 1.0 becomes white after the *255 scaling below
for y in range(21):
    p1 = (0, round(y * h / 21))
    p2 = (w, round(y * h / 21))
    cv2.line(im_orig, p1, p2, 1.0, 1)

qr = cv2.resize(qr, (210, 210), interpolation=cv2.INTER_NEAREST)
im = (im * 255).astype(np.uint8)
qr = (qr * 255).astype(np.uint8)
im_orig = (im_orig * 255).astype(np.uint8)
cv2.imwrite('im.png', im)
cv2.imwrite('qr.png', qr)
cv2.imwrite('im_orig.png', im_orig)
Cropped image (to_process.png in the code).
Grid overlaid to show how this method works.
Thresholded image.
Regenerated QR; note that it still works even though there are multiple errors.
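To check the result programmatically instead of pointing a phone at the screen, the regenerated qr.png could be fed to a decoder. A minimal sketch, assuming the pyzbar package is installed (a white quiet zone is added because most decoders expect one):
import cv2
from pyzbar.pyzbar import decode

qr_img = cv2.imread('qr.png', cv2.IMREAD_GRAYSCALE)
qr_img = cv2.copyMakeBorder(qr_img, 40, 40, 40, 40, cv2.BORDER_CONSTANT, value=255)
print(decode(qr_img))  # should contain b'5176941.12' if the grid was read correctly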
This will be difficult.
If you can decode this QR using a reader (I tried but failed), it is possible to re-code it using a writer. But there is no guarantee that the writer will recreate exactly the same code, as different encoding options are possible.
If your goal is in fact to be able to decode it, you are stuck. Decoding "by hand" might be possible but is lengthy and complicated. You can also consider redrawing the code by hand on a perfect grid and passing that to a reader.
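As a sketch of the "re-code it with a writer" idea, assuming the Python qrcode package and assuming the payload really is the string decoded in the other answer:
import qrcode

img = qrcode.make("5176941.12")  # builds a fresh, clean QR encoding the same payload
img.save('rewritten_qr.png')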
Related
So, I need a tool that will pick a random color from an image and give me the hex code for it - specifically, from an image of a 4-point gradient. I'm hoping to make it so that I can load any image (by pasting in a link) and use that image, but if I have to hard-code a specific image and edit the code each time I need a different one, that's okay too.
My thought was taking the image and randomizing between the height and width in pixels, and then selecting that pixel and getting the hex code of it, which would then be output. However, being fairly new to coding, I haven't found anything from searching online that would let me do something like this.
I have played around in JS fiddle for a few hours, and I can get it to load an image from the web, but the actual selection of a pixel isn't something I can figure out, although I assume it's possible with so many javascript color-pickers out there.
If there's an easier way to do this with a different type of code, I'm completely open to it, I just need to be pointed in the right direction. Thanks everyone!
Coming back to say I figured this out. What I had been looking for (as far as JavaScript goes, anyway) was converting my image to base64 and then reading the pixels from that. Then it was just a matter of picking a random value within the image height and another within the image width, and selecting the corresponding pixel.
x = Math.floor(Math.random() * img.width);   // random column index in [0, width - 1]
y = Math.floor(Math.random() * img.height);  // random row index in [0, height - 1]
I'm sure this code isn't amazing, as I relied heavily on other people's code to figure out what I was doing, but in case it helps anyone this is what the end result looks like:
http://jsfiddle.net/AmoraChinchilla/qxh6mr9u/40/
Foreground-extraction
I am extracting a person from the background using cv2.grabCut, but sometimes background pixels are misclassified as foreground, so the extraction is not perfect. I have attached the resulting image. How can I improve this extraction?
To improve the extraction you need to play with the iterCount and mode parameters.
For instance:
I have the following image:
If I apply the example code:
Can I improve by changing the iterCount?
iterCount=10, 20 (respectively)
iterCount = 30, 40 (respectively)
Can I improve by changing the modes?
mode = GC_INIT_WITH_RECT, GC_INIT_WITH_MASK (respectively)
In my case GC_INIT_WITH_MASK works well, but as I said, you need to keep adjusting the parameters until a satisfactory result comes out.
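For reference, a minimal grabCut sketch with rectangle initialization (the filename and the rectangle roughly enclosing the person are placeholders; iterCount and the mode flag are the two parameters discussed above):
import cv2
import numpy as np

img = cv2.imread('person.jpg')
mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)
rect = (50, 50, 400, 600)  # (x, y, w, h) roughly around the subject - placeholder

cv2.grabCut(img, mask, rect, bgd_model, fgd_model, iterCount=20, mode=cv2.GC_INIT_WITH_RECT)

# keep only the pixels labelled as (probable) foreground
fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype('uint8')
cv2.imwrite('extracted.png', img * fg[:, :, np.newaxis])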
I'm trying to read the NIRPP number (social security number) from a French vital card using Tesseract OCR (I'm using TesseractOCRiOS 4.0.0). Here is what I'm doing:
First, I request a picture of the whole card:
Then, using a custom cropper, I ask the user to zoom in specifically on the card number:
Then I take this image (1291x202 px) and try to read the number with Tesseract:
let tesseract = G8Tesseract(language: "eng")
tesseract?.image = pickedImage
tesseract?.recognize()
print("\(tesseract?.recognizedText ?? "")")
But I'm getting pretty bad results: only about 30% of the time is Tesseract able to find the right number, and even then I sometimes need to trim stray characters (alpha characters, dots, dashes...).
So is there a solution for me to improve these results?
Thanks for your help.
To improve your results:
1. Zoom your image to an appropriate level. The right amount of zoom will improve your accuracy by a lot.
2. Configure Tesseract so that only digits are whitelisted. I am assuming here that what you are trying to extract contains only digits. If you whitelist only digits, it improves your chances of recognizing 0 as the digit zero and not the letter O.
3. If your extracted text matches a regex, you should configure Tesseract to use that regex as well.
4. Pre-process your image to remove any background colors and apply morphology effects such as erosion to increase the space between your characters/digits. If they are too close together, Tesseract will have a hard time recognizing them correctly. Most image processing libraries come with these effects built in.
5. Use TIFF as the image format.
Once you have the right preprocessing pipeline and configuration for Tesseract, you will usually get very good and consistent results.
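As a rough sketch of points 1, 2, and 4 (shown with pytesseract rather than TesseractOCRiOS for brevity; the whitelist variable is the same engine setting on both, and the filename and kernel size are placeholders):
import cv2
import pytesseract

img = cv2.imread('card_number.png', cv2.IMREAD_GRAYSCALE)
img = cv2.resize(img, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)     # 1. zoom
_, img = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)   # 4. drop background colors
img = cv2.dilate(img, cv2.getStructuringElement(cv2.MORPH_RECT, (2, 2)))   # 4. expand the white background to thin dark strokes and separate digits

# 2. single text line, digits only
config = '--psm 7 -c tessedit_char_whitelist=0123456789'
print(pytesseract.image_to_string(img, config=config))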
There are a couple of things you need to do:
1. Apply a black-and-white or grayscale conversion to the image. You can use built-in functionality such as the graphics frameworks, or a third-party library like OpenCV or GPUImage, to apply the black-and-white or grayscale conversion.
2. Then apply text detection using the Vision framework. From the Vision text detection you can crop the text regions according to the detected coordinates.
3. Pass these cropped (text-detected) images to TesseractOCRiOS.
I hope this works for your use case.
Thanks
I have a similar issue. I discovered that Tesseract recognizes text only if the given image contains a region of interest.
I solved the problem using Apple's Vision framework. It has VNDetectTextRectanglesRequest, which returns the CGRect of each detected text region in the image. You can then crop the image to the regions where text is present and send those crops to Tesseract for recognition.
Ray Smith says:
Since HP had independently-developed page layout analysis technology that was used in products, (and therefore not released for open-source) Tesseract never needed its own page layout analysis. Tesseract therefore assumes that its input is a binary image with optional polygonal text regions defined.
I'm implementing an application that searches for a photo in a catalog of textures by comparing histograms.
In order to enhance the accuracy, what processing should I apply to the photo to normalize/clean it before matching it against the catalog?
UPDATE
I added an actual photo taken with the Android camera, and the desired match image that is saved in the catalog.
How can I process the photo to correct the colors, enhance it, and make a better match with the catalog possible?
It really depends on the textures. I think the question to ask is what variation is acceptable and what variation do you want to remove from the catalog. Or put another way, which features do you care about and want to search for?
For example, if color is not important, then a step in normalizing/cleaning would be to convert all to grayscale to remove potential variations. Perhaps a more pertinent example would be that you only want to compare against the strongest edges in the texture so you would blur out the weaker edges.
It all really depends on your specific use case. Consider what you really want to match against and the more specific you can get, the more normalizing and cleaning you can do, and the more accurate your application will be.
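As a rough sketch of that idea in OpenCV/Python (grayscale plus contrast normalization, then a histogram comparison; the filenames, bin count, and comparison metric are placeholders to tune):
import cv2

def gray_hist(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)  # drop color if it is not a feature you care about
    img = cv2.equalizeHist(img)                   # reduce lighting/contrast variation
    hist = cv2.calcHist([img], [0], None, [64], [0, 256])
    return cv2.normalize(hist, hist).flatten()

# 1.0 means identical histograms with the correlation metric
score = cv2.compareHist(gray_hist('photo.jpg'), gray_hist('catalog_texture.jpg'), cv2.HISTCMP_CORREL)
print(score)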
Your question should contain example data and your solution attempt. Let's say you want to find out how much JPEG compression at level 0.35 distorts an image.
img = Import["http://farm1.staticflickr.com/62/171463865_36ee36f70e.jpg"]
img2 = ImportString@ExportString[img, "JPEG", "CompressionLevel" -> 0.35]
diff = ImageSubtract[Image[img, "Real"], Image[img2, "Real"]]
ArrayPlot[0.5 + 5 ImageData[First@ColorSeparate[diff, "Red"]],
 ColorFunction -> "RedGreenSplit", ColorFunctionScaling -> False]
The difference is slight, so the output difference is amplified 5 times.
The example was done using Mathematica (a.k.a. the Wolfram Language). The original question was: How can I tell exactly what changed between two images?
Other than that, @Noremac is right - it really depends on your specific use case.
I want to rotate a given template image at different angles (e.g. 30, 60, 90, ...) and then match the rotated images against a source image to detect objects using OpenCV functions (I'm writing C code)...
How can I do this using OpenCV functions? Or is there any other solution?
Yes, I have searched Stack Overflow, and that function does not pass the rotated image back to the main program. The other code given on Stack Overflow keeps rotating the image continuously, so it cannot be used for template matching.
Is there any other code that solves this problem?
Template matching is not a good choice to match rotated targets.
You had better check out the OpenCV Features2D module.
You'll want to take a special look at the examples for Feature Matching and Homography. Both contain working source code.
For further details and a great explanation of the topic, you can check Innuendo's answer to a similar question here:
scale and rotation Template matching
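As a rough sketch of the feature-matching-plus-homography approach (shown in Python for brevity; the same functions exist in the OpenCV C/C++ API, and the filenames are placeholders):
import cv2
import numpy as np

template = cv2.imread('template.png', cv2.IMREAD_GRAYSCALE)
scene = cv2.imread('scene.png', cv2.IMREAD_GRAYSCALE)

# ORB keypoints and descriptors are rotation-invariant, unlike matchTemplate
orb = cv2.ORB_create(1000)
kp1, des1 = orb.detectAndCompute(template, None)
kp2, des2 = orb.detectAndCompute(scene, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:50]

# estimate the homography mapping template points into the scene
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print('template located' if H is not None else 'no match')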