Here is my code:
import cv2

img_original = cv2.imread("sudoku-original.jpg", 0)  # read as grayscale
cv2.imshow("original", img_original)
laplacian = cv2.Laplacian(img_original, cv2.CV_64F)  # 64-bit float output
cv2.imshow("laplace", laplacian)
cv2.waitKey(0)
I want a result like the one in the documentation, but it doesn't look like that.
Here is the link to the documentation: https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_gradients/py_gradients.html#gradients
laplacian = cv2.Laplacian(img_original,cv2.CV_64F)
The above line produces an image of type CV_64F, i.e. an array of floating-point values. When you pass such an image to cv2.imshow(), values greater than 1.0 are shown as white pixels and values less than 0.0 as black.
So you will need to convert it to CV_8U. There are many ways to do it; I generally use this:
import cv2
import numpy as np

laplacian = cv2.Laplacian(img, cv2.CV_64F)
ret, thresh = cv2.threshold(laplacian, 0, 255.0, cv2.THRESH_TOZERO)  # clip negative responses to zero
laplacian8 = np.uint8(thresh)  # cast the thresholded result to 8-bit for display
cv2.imshow('sud', laplacian8)
This gave me the result:
Check this link to learn more about the problem.
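As an alternative sketch (assuming the same "sudoku-original.jpg" input as above), cv2.convertScaleAbs takes the absolute value and saturates to 8-bit in one call, which is another common way to make a CV_64F Laplacian displayable:

import cv2

img = cv2.imread("sudoku-original.jpg", 0)
laplacian = cv2.Laplacian(img, cv2.CV_64F)
laplacian8 = cv2.convertScaleAbs(laplacian)  # absolute value + saturate to CV_8U
cv2.imshow("laplace 8-bit", laplacian8)
cv2.waitKey(0)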
I am trying to find different approaches for detecting edges in a pixelated image such as this one:
By edges I mean the clear lines formed by the pixels (blocks), not the edges from skin to background, etc.
Does anyone have tips for how to find these edges?
Would a Sobel filter be able to detect these lines as edges?
I have not tested anything yet, I am more looking into options on what type of filters exist.
I will be implementing the stuff in C++ and DirectX12.
There is a large selection of filters.
Result of MATLAB edge function applying different types of filters:
It looks like 'Canny' and 'approxcanny' give the best results.
According to MATLAB documentation:
The 'Canny' and 'approxcanny' methods are not supported on a GPU.
It probably means that the 'Canny' filter is less suited for GPU implementation.
Here is the MATLAB code:
I = imread('images.jpg'); %Read image.
I = rgb2gray(I); %Convert RGB to Grayscale.
%Name of filters.
filt_name = {'sobel', 'Prewitt', 'Roberts', 'log', 'zerocross', 'Canny', 'approxcanny'};
%Display filtered images
figure('Position', [100, 100, size(I,2)*4, size(I,1)*4]);
for i = 1:length(filt_name)
%Filter I using edge detection filters of type 'sobel', 'Prewitt', 'Roberts'...
%Use default MATLAB parameters for each filter.
J = edge(I, filt_name{i});
subplot(3, 3, i);
image(im2uint8(J));
colormap('gray');
title(filt_name{i});
axis image;axis off
end
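Since you plan a C++ implementation, the same comparison is easy to prototype with OpenCV. The following Python sketch (the file name and the Canny hysteresis thresholds are assumptions) shows Sobel gradient magnitude next to Canny:

import cv2

img = cv2.imread('images.jpg', cv2.IMREAD_GRAYSCALE)  # assumed file name
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
sobel = cv2.convertScaleAbs(cv2.magnitude(gx, gy))  # gradient magnitude as 8-bit
canny = cv2.Canny(img, 100, 200)                    # assumed thresholds
cv2.imshow('sobel', sobel)
cv2.imshow('canny', canny)
cv2.waitKey(0)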
I'm trying to use SimpleBlobDetector as described here; however, the simplest possible code I've hacked together does not seem to yield any results:
img = cv2.imread("detect.png")
detector = cv2.SimpleBlobDetector_create()
keypoints = detector.detect(img)
This code yields an empty keypoints array:
[]
The image I'm trying to detect the blobs in is:
I would expect at least 2 blobs to be detected; according to the documentation, SimpleBlobDetector detects dark blobs, and the image does contain 2 of those.
I know it is probably something embarrassingly simple I'm missing here, but I can't seem to figure out what it is. My wild guess is that it has something to do with the circularity of the blob, but trying all kinds of filter parameters I can't seem to figure out the right circularity values.
Update:
As per the comment below suggesting that I should invert my image, despite what the documentation suggests (unless I'm misreading it), I've tried inverting it and running the sample again:
img = cv2.imread("detect.png")
img = cv2.bitwise_not(img)
detector = cv2.SimpleBlobDetector_create()
keypoints = detector.detect(img)
However, as I suspected, this gives the same result, no detections:
[]
The problem is the parameters :) and, for the bottom blob, that it is too close to the border...
You can take a look at the default parameters in this GitHub link, and at the interesting graph at the end of this link, where you can check how the different parameters influence the result.
Basically, you can see that by default it filters by inertia, area and convexity. Now, if you remove the convexity and inertia filters, it will mark the top one. If you remove the area filter as well, it will still show only the top blob... The main issue with the bottom one is that it is too close to the border, so it does not look like a "blob" to the detector... but if you add a small border to the image, it will appear. Here is the code I used for it:
import cv2
import numpy as np
img = cv2.imread('blob.png')
# create a small border around the image, just at the bottom
img = cv2.copyMakeBorder(img, top=0, bottom=1, left=0, right=0, borderType=cv2.BORDER_CONSTANT, value=[255, 255, 255])
# create the params and deactivate the 3 filters
params = cv2.SimpleBlobDetector_Params()
params.filterByArea = False
params.filterByInertia = False
params.filterByConvexity = False
# detect the blobs
detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(img)
# display them
img_with_keypoints = cv2.drawKeypoints(img, keypoints, outImage=np.array([]), color=(0, 0, 255),flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imshow("Frame", img_with_keypoints)
cv2.waitKey(0)
cv2.destroyAllWindows()
and the resulting image:
And yes, you can achieve similar results without deactivating the filters, by changing their parameters instead. For example, these parameters worked with exactly the same result:
params = cv2.SimpleBlobDetector_Params()
params.maxArea = 100000
params.minInertiaRatio = 0.05
params.minConvexity = .60
detector = cv2.SimpleBlobDetector_create(params)
It will heavily depend on the task at hand and on what you are looking to detect. Then play with the min/max values of each filter.
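If it helps, here is a small sketch for inspecting the detector's defaults before tuning; it simply lists the attributes of SimpleBlobDetector_Params and their values:

import cv2

params = cv2.SimpleBlobDetector_Params()
# print every public parameter and its default value
for name in dir(params):
    if not name.startswith('_'):
        print(name, '=', getattr(params, name))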
I am reading an image as a tensor object, which is meant to be a mask.
Now, I want to replace values which are close to white (almost 1.0) with 0
and values which are gray with 1.
Then the mask would be correct for my machine learning task.
I have tried it with:
tf.where(imag >= 1.0)
or the next function, which also returns the indices:
greater = tf.greater_equal(mask, 0.95)
but how do I update/assign 0? scatter_nd_add does not work for me.
mask = tf.scatter_nd_add(mask, greater, 0)
Edit:
I tried it differently:
v_mask = tf.Variable(tf.shape(mask))
ind = tf.to_float(mask >= 0.0)
v_mask.assign(ind)
but if I run the session, it stops there and does not go on.
What I really want to do:
I have a gray image with dimensions (m x n x 1, tensor, float32) and the values are rescaled from [0,255] to [0,1].
I want to replace all values which are white (1) with 0 and gray (0.45 - 0.55) with 1; the rest should be undefined.
To threshold your image, you can use:
thim = tf.to_float(im >= 0.95) # or cast to whichever type you use
To reassign the result to im, assuming it is a variable:
im_update = im.assign(thim)
This gives you an update op that you need to call for the update to happen.
If im is not a variable, then you cannot reassign values to it. Generally though, cases where you really need to reassign values to a node are scarce.
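For the specific mapping described in the question (white to 0, gray to 1), a minimal sketch without any reassignment could look like this; the example values and the 0.45/0.55 cut-offs are assumptions:

import tensorflow as tf

mask = tf.constant([[0.98, 0.50], [0.47, 0.10]], dtype=tf.float32)  # assumed example mask
is_gray = tf.logical_and(mask >= 0.45, mask <= 0.55)
# gray pixels become 1, everything else (including white) becomes 0
new_mask = tf.where(is_gray, tf.ones_like(mask), tf.zeros_like(mask))

This builds a new tensor instead of modifying the old one, which is usually all you need.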
One workaround I found is to use the numpy() bridge. Do the NumPy operations on the NumPy array, and the changes are reflected in the tensor values. This is because the NumPy array and the PyTorch tensor use the same underlying memory locations.
Memory sharing is mentioned in the PyTorch introductory tutorial here.
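A minimal sketch of what is meant (plain CPU tensor, no autograd):

import torch

t = torch.zeros(3)
a = t.numpy()   # 'a' shares memory with the CPU tensor 't'
a[0] = 1.0      # modify through NumPy...
print(t)        # ...and the change shows up in the tensor: tensor([1., 0., 0.])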
I tried to apply the following code to the image in Octave:
sq = imread("Square BW.jpg");
figure(1), imshow(sq);
cont1 = edge(sq,"Sobel");
figure(2), imshow(cont1);
The image I get is:
And a similar image appears if I use the Prewitt function. Can anyone explain to me what is happening? The problem is that I can't visualize the process, only the result, so I can't understand why the code isn't working.
The problem seems to be how the threshold is computed in Octave. You can see how Octave does it by looking at its source, either by entering "type edge" at the Octave prompt or online (I'm not copying the exact code here since it is GPL, although it is quite simple).
To get the border, you will need to set the threshold yourself (hopefully this will be fixed in future versions of Octave's image package, but at the moment it's Matlab-incompatible since the Matlab documentation on their default is unclear).
There's definitely a problem with the way the threshold is computed; however, I wasn't able to find the correct value to use for this picture. After many attempts I found this code that seems to work perfectly:
sq = imread("Square BW.jpg");
maskSobel = fspecial("sobel");
mSobel = uint8(zeros(size(sq)));
for i = 0:3
mSobel += imfilter(sq, rot90(maskSobel, i));
end
figure(1), imshow(mSobel);
First we create the Sobel matrix/operator and a zero matrix the same size as the image Square BW. Then we rotate the Sobel matrix four times (by 90 degrees each time), in order to filter the image in all directions (left-right, up-down, right-left and down-up), always adding the result to the mSobel matrix that was created.
Here's the final result:
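For comparison only (not part of the Octave workaround above), the same "filter, then pick the threshold yourself" idea might look like this in Python/OpenCV; the file name and the threshold of 50 are assumptions:

import cv2

sq = cv2.imread("Square BW.jpg", cv2.IMREAD_GRAYSCALE)
gx = cv2.Sobel(sq, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(sq, cv2.CV_64F, 0, 1, ksize=3)
mag = cv2.convertScaleAbs(cv2.magnitude(gx, gy))           # Sobel gradient magnitude
_, edges = cv2.threshold(mag, 50, 255, cv2.THRESH_BINARY)  # manually chosen threshold
cv2.imshow("edges", edges)
cv2.waitKey(0)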
I have a 128x128 array of elevation data (elevations from -400m to 8000m are displayed using 9 colors) and I need to resize it to 512x512. I did it with bicubic interpolation, but the result looks weird. In the picture you can see the original, nearest and bicubic. Note: only the elevation data are interpolated, not the colors themselves (the gamut is preserved). Are the artifacts seen in the bicubic image a result of my bad interpolation code, or are they caused by interpolating discrete (9-step) data?
http://i.stack.imgur.com/Qx2cl.png
There must be something wrong with the bicubic code you're using. Here's my result with Python:
The black border around the outside is where the result was outside of the palette due to ringing.
Here's the program that produced the above:
from PIL import Image
im = Image.open(r'c:\temp\temp.png')
# convert the image to a grayscale with 8 values from 10 to 17
levels=((0,0,255),(1,255,0),(255,255,0),(255,0,0),(255,175,175),(255,0,255),(1,255,255),(255,255,255))
img = Image.new('L', im.size)
iml = im.load()
imgl = img.load()
colormap = {}
for i, color in enumerate(levels):
colormap[color] = 10 + i
width, height = im.size
for y in range(height):
for x in range(width):
imgl[x,y] = colormap[iml[x,y]]
# resize using Bicubic and restore the original palette
im4x = img.resize((4*width, 4*height), Image.BICUBIC)
palette = []
for i in range(256):
if 10 <= i < 10+len(levels):
palette.extend(levels[i-10])
else:
palette.extend((i, i, i))
im4x.putpalette(palette)
im4x.save(r'c:\temp\temp3.png')
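If the black ringing border is unwanted, one small tweak (an assumption, not part of the original program) is to clamp the resized values back into the 10..17 index range right after the resize, before putpalette is called:

# clamp bicubic overshoot back into the palette index range
im4x = im4x.point(lambda v: min(max(v, 10), 10 + len(levels) - 1))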
Edit: Evidently Python's Bicubic isn't the best either. Here's what I was able to do by hand in Paint Shop Pro, using roughly the same procedure as above.
While bicubic interpolation can sometimes generate interpolated values outside the original range (can you verify whether this is happening to you?), it really seems like you may have a bug, though it is hard to say without looking at the code. As a general rule the bicubic solution should be smoother than the nearest-neighbor solution.
Edit: I take that back; I see no interpolated values outside the original range in your images. Still, I think the strange part is the "jaggedness" you get when using bicubic; you may want to double-check that.