Convert ImageMagick command line arguments to Wand - imagemagick

I'm wondering how to convert this working command line sequence for ImageMagick into a Python script using the Wand library:
convert test.gif -fuzz 5% -layers Optimize test5.gif
My Python code is:
from wand.api import library
from wand.color import Color
from wand.drawing import Drawing
from wand.image import Image
import ctypes

library.MagickSetImageFuzz.argtypes = (ctypes.c_void_p, ctypes.c_double)

with Image(filename='test.gif') as img:
    library.MagickSetImageFuzz(img.wand, img.quantum_range * 0.05)
    with Drawing() as ctx:
        ctx(img)
    img.optimize_layers()
    img.save(filename='test5.gif')
But I get a different result than the ImageMagick command line produces.
Why?

This matches the CLI, but results may vary if the GIF is animated or was previously optimized.
from wand.image import Image

with Image(filename='test.gif') as img:
    img.fuzz = img.quantum_range * 0.05
    img.optimize_layers()
    img.save(filename='test5.gif')
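For the animated or previously optimized case, one approach (a sketch assuming Wand 0.5+, where coalesce() is exposed) is to rebuild full frames before re-optimizing:

from wand.image import Image

with Image(filename='test.gif') as img:
    img.coalesce()  # rebuild full frames from any earlier frame optimization
    img.fuzz = img.quantum_range * 0.05
    img.optimize_layers()
    img.save(filename='test5.gif')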

Related

Tesseract Failing on reasonably clear image

I have been trying out Tesseract OCR in combination with OpenCV (Emgu CV, C#) and I am trying to improve the reliability. On the whole it has been good, and by applying various filters one at a time and attempting OCR (Original, Bilateral, AdaptiveThreshold, Dilate) I have seen significant improvement.
However...
The following image is being stubborn; despite seeming quite clear to begin with, I get no results from Tesseract (original image before filters):
In this case I am simply after the 2.57.
Instead of applying filters to the image, scaling the image helped the OCR. Below is the code I tried. Sorry, I am on Linux, so I tested with Python instead of C#.
#!/usr/bin/env python3
import argparse
import cv2
import pytesseract

ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True, help="Path to the image")
args = vars(ap.parse_args())
img = cv2.imread(args["image"])

# OCR
barroi = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
scale_percent = 8  # percent of original size
width = int(barroi.shape[1] * scale_percent / 100)
height = int(barroi.shape[0] * scale_percent / 100)
dim = (width, height)
barroi = cv2.resize(barroi, dim, interpolation=cv2.INTER_AREA)
text = pytesseract.image_to_string(barroi, lang='eng', config='--psm 10 --oem 3')
print(str(text))

imageName = "Result.tif"
cv2.imwrite(imageName, img)
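A note on the resize: INTER_AREA is the usual choice when shrinking. If the source image is small, enlarging it instead often helps Tesseract; here is a hedged variation on the block above, where the factor of 3 is just an illustrative assumption:

# Illustrative variation: enlarge instead of shrink before OCR.
# INTER_CUBIC tends to work better than INTER_AREA when upscaling.
barroi_up = cv2.resize(barroi, None, fx=3, fy=3, interpolation=cv2.INTER_CUBIC)
text_up = pytesseract.image_to_string(barroi_up, lang='eng', config='--psm 10 --oem 3')
print(text_up)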

How to stitch images by openCV with masks? (Only stitch parts of images)

I was trying to stitch images with OpenCV stitcher.
Some of the feed in images have some black areas left by previous processing and they cause some trouble while stitching.
First image
Second image
Third image
Fourth image
And my code is like this:
import numpy as np
import cv2
import os
import glob
import argparse

parser = argparse.ArgumentParser(description='Stitch several images.')
parser.add_argument("-i", "--inputfiles", type=str, help="the input files", nargs='+')
parser.add_argument("-o", "--outputfiles", type=str, help="the output files")
args = parser.parse_args()
print "input: " + str(args.inputfiles)
print "potential output: " + args.outputfiles

stitcher = cv2.createStitcherScans(True)
arrayOfImage = []
for imageName in args.inputfiles:
    arrayOfImage.append(cv2.imread(imageName))
stitchingResult = stitcher.stitch(arrayOfImage)
cv2.imwrite(args.outputfiles, stitchingResult[1])
And I use command
python stitchingImages.py -i 1.png 2.png 3.png 4.png -o result.png
to test it.
The result is
Result
I think the problem is that the black areas are recognised as strong features. Is there any way to exclude them from feature detection and blending?
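One possible direction, as a hedged sketch assuming OpenCV 4.x, where Stitcher.stitch() accepts per-image masks: mask out the near-black regions so the feature detector ignores them. The threshold value 5 and the erosion kernel size are illustrative assumptions to tune:

import cv2
import numpy as np

images = [cv2.imread(name) for name in ["1.png", "2.png", "3.png", "4.png"]]
masks = []
for img in images:
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Treat near-black pixels as invalid, then erode so features near the
    # boundary of the valid region are excluded as well.
    mask = cv2.threshold(gray, 5, 255, cv2.THRESH_BINARY)[1]
    mask = cv2.erode(mask, np.ones((15, 15), np.uint8))
    masks.append(mask)

stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
status, pano = stitcher.stitch(images, masks)
if status == cv2.Stitcher_OK:
    cv2.imwrite("result.png", pano)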

OpenCV hangs when using multiprocessing on a Raspberry Pi

This code runs as expected, and gives the expected output
import multiprocessing
import cv2
import os

path = r"/home/pi/Desktop/calibration.jpg"
image = cv2.imread(path)

def cvtcolor(img):
    print "converting to gray ..."
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    print "converted to gray"

if True:
    p = multiprocessing.Process(name='test',
                                target=cvtcolor,
                                kwargs={'img': image})
    p.start()
    p2 = multiprocessing.Process(name='test',
                                 target=cvtcolor,
                                 kwargs={'img': image})
    p2.start()
outputs:
converting to gray ...
converting to gray ...
converted to gray
converted to gray
However, this code hangs when executed
import multiprocessing
import cv2
import os

path = r"/home/pi/Desktop/calibration.jpg"
image = cv2.imread(path)

def cvtcolor(img):
    print "converting to gray ..."
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    print "converted to gray"

cvtcolor(image)

if True:
    p = multiprocessing.Process(name='test',
                                target=cvtcolor,
                                kwargs={'img': image})
    p.start()
The call in the main process completes, but the function executed in the "test" process hangs forever:
converting to gray ...
converted to gray
converting to gray ...
I am using OpenCV version 3.2.0, installed as detailed here, on Raspbian Jessie (Raspberry Pi).
Does anyone have an explanation or solution for this?
Have a look at what is returned. If you call BGR2GRAY directly, you get an array whose shape attribute matches the input image but with only one channel, i.e. gray. When you run the same function under multiprocessing you do not get an array back, and it will have no shape attribute. Try printing the output to see what form it is in, then perhaps reconstruct an image from it.
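For example, a minimal diagnostic along those lines (the cvtcolor_debug helper is hypothetical, written to run under both Python 2 and 3):

def cvtcolor_debug(img):
    # Report what cv2.cvtColor hands back inside the child process, rather
    # than assuming it is a numpy array with a shape attribute.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    print("type: %s" % type(gray))
    print("shape: %s" % (getattr(gray, "shape", "no shape attribute"),))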

tesseract not able to read all digits accurately

I'm using Tesseract to recognize numbers from images of a screen taken with a phone camera. I've done some preprocessing of the image (processed image), and using Tesseract I get mixed results. Using the following code on the above image, I get the output "EOE". However, with this image (processed image), I get an exact match: "39:45.8".
import cv2
import pytesseract
from PIL import Image

orig_name = "time3.jpg"
image_name = "time3_.jpg"
img = cv2.imread(orig_name, 0)
img = cv2.medianBlur(img, 5)
img_th = cv2.adaptiveThreshold(img, 255,
                               cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 11, 2)
cv2.imshow('image', img_th)
cv2.waitKey(0)
cv2.imwrite(image_name, img_th)
im = Image.open(image_name)
time = pytesseract.image_to_string(im, config="-psm 7")
print(time)
Is there anything I can do to get more consistent results?
I did three additional things to get it correct for the first image:
1. You can set a whitelist for Tesseract. In your case we know there will only be characters from the list 01234567890.:, which improves the accuracy significantly.
2. I resized the image to make it easier for Tesseract.
3. I switched from psm mode 7 to 11 (recognize as much text as possible).
Code:
import cv2
import pytesseract
from PIL import Image

orig_name = "./time1.jpg"
img = cv2.imread(orig_name)
height, width, channels = img.shape
imgResized = cv2.resize(img, (width * 3, height * 3))
cv2.imshow("img", imgResized)
cv2.waitKey()
im = Image.fromarray(imgResized)
time = pytesseract.image_to_string(im, config='--tessdata-dir "/home/rvq/github/tesseract/tessdata/" -c tessedit_char_whitelist=01234567890.: -psm 11 -oem 0')
print(time)
Note:
You can use Image.fromarray(imgResized) to convert an OpenCV image to a PIL Image, so you don't have to write it to disk and read it back again.
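One caveat with that conversion, as a general OpenCV/PIL note rather than part of the answer above: OpenCV stores color images in BGR order while PIL assumes RGB, so swap the channels first if color order matters (for black-on-white digits it usually does not):

# Swap BGR (OpenCV order) to RGB (PIL order) before wrapping in a PIL Image.
rgb = cv2.cvtColor(imgResized, cv2.COLOR_BGR2RGB)
im = Image.fromarray(rgb)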

How to read images from Hadoop sequence file using opencv and MrJob?

I created a sequence file from a tar file full of images with tar-to-seq.jar. Now I want to recreate the images from the bytes in that sequence file and analyze them. I'm using OpenCV 3.0.0 and mrjob 0.5.
I'm having trouble reading the images with the cv2.imdecode() method; I'm getting a null value.
from mrjob.job import MRJob
import os
import sys
import cv2
import numpy as np

class CountLavander(MRJob):
    HADOOP_INPUT_FORMAT = 'org.apache.hadoop.mapred.SequenceFileAsTextInputFormat'

    def mapper(self, key, value):
        imgbytes = np.fromstring(value, dtype='uint8')
        imarr = cv2.imdecode(imgbytes, cv2.IMREAD_COLOR)
        yield imarr, 1

    def reducer(self, key, values):
        yield key, sum(values)

if __name__ == '__main__':
    CountLavander.run()
As a result of running this operation:
python count_lavander.py -r hadoop --hadoop-bin /usr/bin/hadoop
    --hadoop-streaming-jar /usr/hdp/2.2.8.0-3150/hadoop-mapreduce/hadoop-streaming-2.6.0.2.2.8.0-3150.jar
    --interpreter /usr/local/bin/python2.7 cor_data.seq
I'm getting:
null 2731
I packed 2,731 images into that sequence file, so I guess it is packed well, but somehow I can't read them back as images.
Does anyone have an idea?
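One way to narrow this down, as a hedged diagnostic sketch rather than a confirmed fix: cv2.imdecode() returns None when the bytes cannot be decoded as an image, and a None mapper key is serialized as JSON null, which would explain the null 2731 output. A mapper that reports what it actually receives can show whether SequenceFileAsTextInputFormat is mangling the raw image bytes (the InspectBytes class name is hypothetical):

from mrjob.job import MRJob

class InspectBytes(MRJob):
    HADOOP_INPUT_FORMAT = 'org.apache.hadoop.mapred.SequenceFileAsTextInputFormat'

    def mapper(self, key, value):
        # Emit the length and the first few bytes of each record; real JPEG
        # data should start with the magic number \xff\xd8.
        yield repr(value[:8]), len(value)

if __name__ == '__main__':
    InspectBytes.run()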
