I'm using vconcat/hconcat to concatenate 2 images, but after concatenation I have a visible line between the 2 images. I want to know if there is a proper way to remove that line, or another way to create a seamless repeating pattern image by concatenation.
import cv2
import numpy as np
im1 = cv2.imread('test1.jpeg')
y=0
x=0
h=2000
w=2000
im1 = im1[y:y+h, x:x+w]
im_v = cv2.vconcat([im1, im1, im1])
im_v2 = cv2.hconcat([im_v,im_v, im_v])
cv2.imwrite('opencv_vconcat.png', im_v2)
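One common workaround (just a sketch, not from the original post) is mirror tiling: flip the tile before each concatenation so that the touching edges are identical and no seam shows. It reuses the test1.jpeg crop from above; the output filename is only an example.
import cv2
img = cv2.imread('test1.jpeg')
tile = img[0:2000, 0:2000]
flipped_h = cv2.flip(tile, 1)               # mirror left-right
row = cv2.hconcat([tile, flipped_h, tile])  # horizontal edges now match
flipped_v = cv2.flip(row, 0)                # mirror top-bottom
pattern = cv2.vconcat([row, flipped_v, row])
cv2.imwrite('seamless_tile.png', pattern)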
Here is what my images look like: [image]
I am trying to stack the image files into one file and also resize the black-and-white images to 1000x1000. I couldn't get it to work: my images are 600x400, but I need them at 1000 pixels. Please help me with how to do this.
Here is how I load my images:
import cv2
import glob
img1= [cv2.imread(file) for file in glob.glob('C:/Users/NanduCn/jupter1/deepl/challenges-master/ML/stack1/*jpg')]
img2= [cv2.imread(file) for file in glob.glob('C:/Users/NanduCn/jupter1/deepl/challenges-master/ML/stack2/*jpg')]
img3= [cv2.imread(file) for file in glob.glob('C:/Users/NanduCn/jupter1/deepl/challenges-master/ML/stack3/*jpg')]
img4= [cv2.imread(file) for file in glob.glob('C:/Users/NanduCn/jupter1/deepl/challenges-master/ML/stack4/*jpg')]
Here I combine all the images into one list:
img=img1+img2+img3+img4
Here is where I resize the images:
im_g=cv2.resize(img,(1000,1000))
--------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-69-56a6794f0ec5> in <module>()
----> 1 im_g=cv2.resize(img,(1000,1000))
TypeError: src is not a numpy array, neither a scalar
In your code, img1, img2, img3 and img4 are Python lists, so the + operator joins them the list way (concatenation) rather than adding the images.
For example, with N images of size (h, w) in each folder (stack1, stack2, ...), img1 has shape (N, h, w) but img1+img2 has shape (2N, h, w), and cv2.resize cannot operate on a list at all, which is the TypeError you see. Use numpy arrays instead, so that + adds the images element-wise.
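A quick illustration of that difference with tiny made-up 4x4 dummy images (the shapes and values are only for the demo):
import numpy as np
list_a = [np.ones((4, 4), np.uint8)] * 3   # a "folder" of 3 dummy images
list_b = [np.ones((4, 4), np.uint8)] * 3
print(len(list_a + list_b))                # 6 -> list +: 2N images, nothing summed
arr_a = np.array(list_a)                   # shape (3, 4, 4)
arr_b = np.array(list_b)
print((arr_a + arr_b).shape)               # (3, 4, 4) -> images added element-wise
print((arr_a + arr_b)[0, 0, 0])            # 2 -> pixel values actually summed
Applied to your folders, that becomes: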
import cv2
import glob
import numpy as np
img1 = np.array([cv2.imread(file) for file in glob.glob('C:/Users/NanduCn/jupter1/deepl/challenges-master/ML/stack1/*jpg')])
img2 = np.array([cv2.imread(file) for file in glob.glob('C:/Users/NanduCn/jupter1/deepl/challenges-master/ML/stack2/*jpg')])
img3 = np.array([cv2.imread(file) for file in glob.glob('C:/Users/NanduCn/jupter1/deepl/challenges-master/ML/stack3/*jpg')])
img4 = np.array([cv2.imread(file) for file in glob.glob('C:/Users/NanduCn/jupter1/deepl/challenges-master/ML/stack4/*jpg')])
imgs = list(img1+img2+img3+img4)  # with numpy arrays, + sums the four stacks element-wise; list() splits the result back into N summed images
for img in imgs:
    im_g = cv2.resize(img,(1000,1000))
How many files are there in the folder (stack1, stack2, ...)?
The way you are using glob, the result is multiple files per folder. You have to add one more step that stacks the files within each folder.
If the desired result is only one file, try this.
import cv2
import glob
import numpy as np
img1 = [cv2.imread(file) for file in glob.glob('C:/Users/NanduCn/jupter1/deepl/challenges-master/ML/stack1/*jpg')]
img2 = [cv2.imread(file) for file in glob.glob('C:/Users/NanduCn/jupter1/deepl/challenges-master/ML/stack2/*jpg')]
img3 = [cv2.imread(file) for file in glob.glob('C:/Users/NanduCn/jupter1/deepl/challenges-master/ML/stack3/*jpg')]
img4 = [cv2.imread(file) for file in glob.glob('C:/Users/NanduCn/jupter1/deepl/challenges-master/ML/stack4/*jpg')]
imgs = (img1+img2+img3+img4)
stacked_img = np.array(img1[0], dtype=np.float64)  # accumulate in float to avoid uint8 overflow
for img in imgs[1:]:
    stacked_img += np.array(img, dtype=np.float64)
im_g = cv2.resize(stacked_img,(1000,1000))
Note: you may want to normalize (average) the values of the stacked image.
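For example, a minimal sketch of that averaging step, continuing from the float accumulator above (the output filename is just an example):
averaged = (stacked_img / len(imgs)).astype(np.uint8)  # mean of all stacked images, back to 8-bit
im_g = cv2.resize(averaged, (1000, 1000))
cv2.imwrite('stacked_average.png', im_g)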
I want to increase/decrease the height of the image for the selected area only (the area between the white lines), as depicted in the image, and not outside that area.
This is the same functionality which is performed in the app Manly - Body Muscle Editor Pro
How can I achieve that? Any help is appreciated.
I've never written code for iOS, but I know OpenCV also works on iOS. Here I use cv2.resize.
import cv2
import numpy as np
img = cv2.imread("1.jpg")
print(img.shape)
h = img.shape[0]
w = img.shape[1]
part_to_resize = img[120:240,:]
old_height = 120 #240-120
new_height = 200
final_result = np.zeros((h-(240-120)+new_height,w,3),dtype='uint8')
final_result[0:120,:] = img[0:120,:]
final_result[120:120+new_height,:] = cv2.resize(part_to_resize, (w, new_height))
final_result[120+new_height:,:] = img[240:,:]
cv2.imshow("final_result", final_result)
cv2.imshow("img", img)
cv2.waitKey()
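The 120/240 band above is hard-coded for this example image; here is a hedged sketch of the same idea as a reusable helper (the function name and arguments are mine, not from the original answer, and it reuses the cv2/numpy imports above):
def stretch_band(img, y0, y1, new_height):
    # resize only the horizontal band img[y0:y1] to new_height; keep the rest untouched
    h, w = img.shape[:2]
    band = cv2.resize(img[y0:y1, :], (w, new_height))
    out = np.zeros((h - (y1 - y0) + new_height, w, 3), dtype='uint8')
    out[:y0, :] = img[:y0, :]
    out[y0:y0 + new_height, :] = band
    out[y0 + new_height:, :] = img[y1:, :]
    return out
taller = stretch_band(img, 120, 240, 200)  # same numbers as the snippet above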
I have an image that has some text in it. I want to send the image to OCR, but the image has some white noise in it, so the OCR results aren't that great. I've tried to erode/dilate the image but couldn't get the perfect threshold to work. Since all the text in the images will be perfectly horizontal, I tried the Hough transform.
Here is what the image looks like when I run the sample Hough transform program bundled with OpenCV.
Question
How can I black out everything except where the red lines are?
OR How can I crop out a separate image for each of the areas highlighted by the red lines?
I would only like to concentrate on lines that are horizontal; I can discard the diagonal lines.
Either option will work for me when sending to OCR. However, I'd like to try both to see which yields the best results.
How-to's, with output
How can I black out everything except where the red lines are?
dotess2()
['Footel text goes he: e\n', 'Some mole hele\n', 'Some Text Here\n']
OR How can I crop out a separate image for each of the areas highlighted by the red lines?
dotess1()
['Foolel text goes he: e\n', 'Some mole hele\n', 'Some Text Here\n', 'Directions\n']
code
# -*- coding: utf-8 -*-
import cv2
import numpy as np
import math
import subprocess
import os
import operator
#some clean up/init blah blah
junk='\/,-‘’“ ”?.\';!{§_~!##$%^&*()_+-|:}»£[]¢€¥°><'
tmpdir='./tmp'
if not os.path.exists(tmpdir):
    os.makedirs(tmpdir)
for path, subdirs, files in os.walk(tmpdir):
    for name in files:
        os.remove(os.path.join(path, name))
#when the preprocessor is not perfect, there will be junk in the result. this is a crude means of filtering it out
def resfilter(res):
    rd = dict()
    for l in set(res):
        rd[l]=0.
    for l in rd:
        for i in l:
            if i in junk:
                rd[l]-=1
            elif i.isdigit():
                rd[l]+=.5
            else:
                rd[l]+=1
    ret=[]
    for v in sorted(rd.iteritems(), key=operator.itemgetter(1), reverse=True):
        ret.append(v[0])
    return ret
def dotess1():
    res =[]
    for path, subdirs, files in os.walk(tmpdir):
        for name in files:
            fpath = os.path.join(path, name)
            img = cv2.imread(fpath)
            gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
            '''
            #if the text is too small/contains noise etc, resize and maintain aspect ratio
            if gray.shape[1]<100:
                gray=cv2.resize(gray,(int(100/gray.shape[0]*gray.shape[1]),100))
            '''
            cv2.imwrite('tmp.jpg',gray)
            args = ['tesseract.exe','tmp.jpg','tessres','-psm','7', '-l','eng']
            subprocess.call(args, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
            with open('tessres.txt') as f:
                for line in f:
                    if line.strip() != '':
                        res.append(line)
    print resfilter(res)
def dotess2():
    res =[]
    args = ['tesseract.exe','clean.jpg','tessres','-psm','3', '-l','eng']
    subprocess.call(args, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    with open('tessres.txt') as f:
        for line in f:
            if line.strip() != '':
                res.append(line)
    print resfilter(res)
'''
start of code
'''
img = cv2.imread('c:/data/ocr3.png')
gray=cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
canny=cv2.Canny(gray,50,200,3)
cv2.imshow('canny',canny)
#remove the actual horizontal lines so that Hough won't detect them
linek = np.zeros((11,11),dtype=np.uint8)
linek[5,...]=1
x=cv2.morphologyEx(canny, cv2.MORPH_OPEN, linek ,iterations=1)
canny-=x
cv2.imshow('canny no horizontal lines',canny)
#draw a fat line so that you can box it up
lines = cv2.HoughLinesP(canny, 1, math.pi/2, 50,50, 50, 20)
linemask = np.zeros(gray.shape,gray.dtype)
for line in lines[0]:
    if line[1]==line[3]:#check horizontal
        pt1 = (line[0],line[1])
        pt2 = (line[2],line[3])
        cv2.line(linemask, pt1, pt2, (255), 30)
cv2.imshow('linemask',linemask)
'''
* two methods of doing ocr,line mode and page mode
* boxmask is used to so that a clean image can be saved for page mode
* for every detected boxes, the roi are cropped and saved so that tess3 can be run in line mode
'''
boxmask = np.zeros(gray.shape,gray.dtype)
contours,hierarchy = cv2.findContours(linemask,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)
idx=0
for cnt in contours:
    idx+=1
    area = cv2.contourArea(cnt)
    x,y,w,h = cv2.boundingRect(cnt)
    roi=img[y:y+h,x:x+w].copy()
    cv2.imwrite('%s/%s.jpg'%(tmpdir,str(idx)),roi)
    cv2.rectangle(boxmask,(x,y),(x+w,y+h),(255),-1)
cv2.imshow('clean',img&cv2.cvtColor(boxmask,cv2.COLOR_GRAY2BGR))
cv2.imwrite('clean.jpg',img&cv2.cvtColor(boxmask,cv2.COLOR_GRAY2BGR))
cv2.imshow('img',img)
dotess1()
dotess2()
cv2.waitKey(0)
from scipy.spatial.distance import seuclidean #imports abridged
import scipy
img = np.asarray(Image.open("testtwo.tif").convert('L'))
img = 1 * (img < 127)
area = (img == 0).sum() # computing white pixel area
print area
areasplit = np.split(img, 24) # splitting image array
print areasplit
for i in areasplit:
    result = (i == 0).sum()
    print result #computing white pixel area for every single array
minimal = result.min()
maximal = result.max()
dist = seuclidian(minimal, maximal)
print dist
I want to compute distances between array elements produced from splitting an image. Python can't recognize the name of the distance functions (I have tried several of them and various approaches to importing modules). How do I import and call these functions correctly? Thank you.
You haven't stated what the error is, but you are using numpy as well and I can't see an import for that.
Try
import numpy as np
import scipy.spatial.distance
Then try
dist = scipy.spatial.distance.euclidean(minimal, maximal)
dists = scipy.spatial.distance.seuclidean(minimal, maximal, variances)
Note - the standardised euclidean distance takes a third parameter.
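A minimal sketch of both calls with made-up vectors (the values are only illustrative); the third argument to seuclidean is the per-dimension variance vector:
import numpy as np
from scipy.spatial import distance
a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 4.0, 6.0])
print(distance.euclidean(a, b))                      # plain Euclidean distance
print(distance.seuclidean(a, b, [0.5, 1.0, 2.0]))    # standardised, with a variance per dimension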
I have an image in a numpy array which I save using savefig and then load back with OpenCV's LoadImage function into a CvMat. But I want to remove this save-and-reload step.
My numpy image size is 25x21, and if I use the fromarray function like
im = cv.fromarray(asarray(img))
I get a CvMat of size 25x21, which is very small. But when I save the image to png format and load it back using LoadImage, I get the full-sized image of size 429x509.
Can somebody please tell me how do I get this full sized image from numpy array to CvMat? Can I convert the image from numpy array to a png format in code without saving it using savefig()?
This is what I am doing right now.
imgFigure = imshow(zeros((gridM,gridN)),cmap=cm.gray,vmin=VMIN,vmax=5,animated=True,interpolation='nearest',extent=[xmin,xmax,ymin,ymax])
imgFigure.set_data(reshape(img,(gridM,gridN)))
draw()
fileName = '1p_'
fileName += str(counter)
fileName += ".png"
savefig(fileName,bbox_inches='tight',pad_inches=0.01,facecolor='black')
The size of img above is 525, and gridM and gridN are 25 and 21. Then I load this image using:
img = cv.LoadImage(fileName, cv.CV_LOAD_IMAGE_GRAYSCALE)
Now img size is 429x509.
You can just use cv.fromarray() directly on your numpy array, with no need to save in between:
import cv
import numpy as np
a = np.arange(0,255,0.0255).reshape(50,200)
b = cv.fromarray(a)
cv.SaveImage('saved.png', b)
print b
#Output:
<cvmat(type=42424006 64FC1 rows=50 cols=200 step=1600 )>
The numpy array becomes a cvmat, and the size is unchanged. This is the saved image: [image]
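If you also want the array at the larger size the saved figure was giving you (rather than the raw 25x21 data), one option is to resize the array yourself before converting. A hedged sketch using cv2.resize alongside the old cv module (the target size and the use of cv2 here are my assumptions, not part of the original answer):
import cv
import cv2
import numpy as np
a = np.arange(0, 255, 0.0255).reshape(50, 200)
big = cv2.resize(a, (509, 429))   # (width, height), so the result is a 429x509 array
mat = cv.fromarray(big)
print(mat)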