PySimpleGUI/tk: drawing an OpenCV image into sg.Graph efficiently

An image captured from a camera is stored in a numpy ndarray with a shape of (1224, 1024, 3).
This format is very convenient for applying OpenCV methods to it.
I was looking for a way to draw it into (or onto) an sg.Graph element of PySimpleGUI.
The method I found worked, but was very inefficient:
def draw_img(self, img):
    # requires: from PIL import Image; from io import BytesIO
    # turn the image into a PIL Image object:
    pil_im = Image.fromarray(img)
    # use PIL to encode the image as an in-memory PNG file:
    with BytesIO() as output:
        pil_im.save(output, format="PNG")
        png = output.getvalue()
    # remove any previous elements from the canvas of our sg.Graph:
    self.image_element.erase()
    # draw the image onto the sg.Graph element:
    self.image_element.draw_image(data=png, location=(0, self.img_sz[1]))
The inefficiency clearly comes from encoding the raw image as PNG for every frame.
However, I could not find any better way to do this! In my case I had to show every frame coming from the camera, and it was far too slow.
So what is a better way to do it?

I could not find any 'by the book' solution, but I did find a working approach.
The idea came from a discussion on this page. To use it, I had to copy the original code of draw_image from PySimpleGUI and modify it.
def draw_img(self, img):
    # requires: import tkinter as tk; self.img_id must start out as None
    # turn our ndarray into a PPM byte string by prepending a simple header:
    # this header is for RGB (P6); for monochrome use P5 (see the PPM docs)
    ppm = ('P6 %d %d 255 ' % (self.img_sz[0], self.img_sz[1])).encode('ascii') + img.tobytes()
    # turn that byte string into a PhotoImage object:
    image = tk.PhotoImage(width=self.img_sz[0], height=self.img_sz[1], data=ppm, format='PPM')
    if self.img_id is None:
        # the first time, create and attach an image object to the canvas of our sg.Graph:
        self.img_id = self.image_element.Widget.create_image((0, 0), image=image, anchor=tk.NW)
        # we must mimic the way sg.Graph keeps track of its added objects:
        self.image_element.Images[self.img_id] = image
    else:
        # afterwards, reuse the existing image object and only change its content:
        self.image_element.Widget.itemconfig(self.img_id, image=image)
        # we must update this reference too:
        self.image_element.Images[self.img_id] = image
Using this method I achieved much better performance, so I decided to share the solution with the community. Hoping this helps somebody!
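For context, here is a minimal sketch (not from the original post) of how draw_img might be called once per frame. It assumes the class above is instantiated as app, that app.window is its sg.Window, and that the camera resolution matches app.img_sz; note that OpenCV delivers BGR frames while the P6 header above expects RGB byte order.
import cv2
import PySimpleGUI as sg

cap = cv2.VideoCapture(0)                       # camera index is an assumption
while True:
    event, _ = app.window.read(timeout=0)       # poll the GUI without blocking
    if event == sg.WIN_CLOSED:
        break
    ok, frame = cap.read()
    if not ok:
        continue
    # convert BGR (OpenCV) to RGB before handing the frame to draw_img
    app.draw_img(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
cap.release()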

Thank you for helping me. I adapted your approach; my code:
def image_source(path, width=128, height=128):
    img = Image.open(path).convert('RGB')   # PIL image
    image = img.resize((width, height))
    ppm = ('P6 %d %d 255 ' % (width, height)).encode('ascii') + image.tobytes()
    return ppm

image1 = image_source(r'Z:\xxxx6.png')
layout = [.... sg.Image(data=image1) ...
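For completeness, a minimal hypothetical window around that snippet (the window title and key are placeholders, not from the original post):
import PySimpleGUI as sg

layout = [[sg.Image(data=image1, key='-IMG-')]]
window = sg.Window('PPM image demo', layout)
while True:
    event, values = window.read()
    if event == sg.WIN_CLOSED:
        break
window.close()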

Related

Texture transformation

I am working on eigen transformation - texture - to detect objects in an image. This work was published in ACCV 2006, page 71; the full PDF is available as chapter 3 of this document: https://www.diva-portal.org/smash/get/diva2:275069/FULLTEXT01.pdf. I am not able to follow what comes after getting the texture descriptors.
I am working on the attached image. The image size is 954x1440.
I took image patches of 32x32 and for every patch calculated the eigenvalues to get the texture descriptor. What to do with these texture descriptors after that is what I cannot follow.
Any help to unblock me will be really appreciated. The code for calculating the descriptors looks like this:
# gray is assumed to be the grayscale image as a 2-D NumPy array (import numpy as np)
w = 32                                   # patch size
descriptors = np.zeros((gray.shape[0]//w, gray.shape[1]//w))
for i in range(gray.shape[0]//w):
    for j in range(gray.shape[1]//w):
        # eigenvalues of the 32x32 patch, sorted in descending order
        sorted_eigen = -np.sort(-np.linalg.eigvals(gray[i*w:(i+1)*w, j*w:(j+1)*w]))
        # indices (within the sorted eigenvalues of this patch) averaged into the descriptor
        l = 13
        k = w
        theta_svd = (1/(k-l+1)) * np.sum([np.abs(val) for val in sorted_eigen[l:k]])
        descriptors[i, j] = theta_svd
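Not part of the question, but a small sketch that may help inspect the result: plotting the descriptor map (assuming matplotlib is available) shows whether textured regions stand out from smooth ones.
import matplotlib.pyplot as plt

# visualize the per-patch descriptor values as a small image
plt.imshow(descriptors, cmap='gray')
plt.title('eigenvalue texture descriptor per 32x32 patch')
plt.colorbar()
plt.show()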

Filling holes in 3D volume

I am trying to find libraries that can fill a small cavity inside a volume, as well as a hole that goes through the volume like a tube. I tried SimpleITK but did not have any success with it. I tried all the grayscale morphological operations there, but these holes are not getting filled.
I would like to know a solution to this problem.
import SimpleITK as sitk
image = sitk.ReadImage("volume.mha")
filt_1 = sitk.GrayscaleFillholeImageFilter()
filt_2 = sitk.GrayscaleMorphologicalClosingImageFilter()
output_1 = filt_1.Execute(image)
output_2 = filt_2.Execute(image)
The filters are created with default parameters and are then applied to the input image.
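(As an aside, and purely an assumption on my part rather than something tried in the post: the closing filter's structuring element can be enlarged before executing, which sometimes helps with bigger cavities.)
# hypothetical tweak: enlarge the closing kernel (radius per dimension) before Execute
filt_2.SetKernelRadius([5, 5, 5])
output_2 = filt_2.Execute(image)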
Thanks and Regards
Vaibhav
A little bit late, but this answer could be of use to others dealing with the same problem:
import SimpleITK as sitk

image = sitk.ReadImage("volume.mha")
# be sure that you have a binary (or at least grayscale) image
binary_image = sitk.BinaryThreshold(image, 0.5, 1, 1, 0)
dims = binary_image.GetDimension()
# then apply a fill-hole algorithm
filledMask = sitk.BinaryFillhole(binary_image)
# since the step before performs a dilation, try an erosion afterwards
erodedMask = sitk.BinaryErode(filledMask, [3]*dims)
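As a possible follow-up (my addition, not part of the original answer), the result can be written back out for inspection in a volume viewer; the output filename is arbitrary.
# save the hole-filled, eroded mask next to the input volume
sitk.WriteImage(erodedMask, "volume_filled.mha")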

How do I create a dataset with multiple images the same format as CIFAR10?

I have images of size 1750x1750 and I would like to label them and put them into a file in the same format as CIFAR10. I have seen a similar question before whose answer was:
from PIL import Image
import numpy as np

label = [3]
im = Image.open(img)
im = np.array(im)
print(im)
r = im[:, :, 0].flatten()
g = im[:, :, 1].flatten()
b = im[:, :, 2].flatten()
array = np.array(list(label) + list(r) + list(g) + list(b), np.uint8)
array.tofile("info.bin")
but it doesn't show how to add multiple images to a single file. I have looked at CIFAR10 and tried to append the arrays in the same way, but all I got was the following error:
E tensorflow/core/client/tensor_c_api.cc:485] Read less bytes than requested
Note that I am using Tensorflow to do my computations, and I have been able to isolate the problem from the data.
The CIFAR-10 binary format represents each example as a fixed-length record with the following format:
1-byte label.
1 byte per pixel for the red channel of the image.
1 byte per pixel for the green channel of the image.
1 byte per pixel for the blue channel of the image.
Assuming you have a list of image filenames called images, and a list of integers (less than 256) called labels corresponding to their labels, the following code would write a single file containing these images in CIFAR-10 format:
import numpy as np
from PIL import Image

with open(output_filename, "wb") as f:
    for label, img in zip(labels, images):
        label = np.array(label, dtype=np.uint8)
        f.write(label.tobytes())            # Write label (tobytes() is the modern name for tostring()).
        im = np.array(Image.open(img), dtype=np.uint8)
        f.write(im[:, :, 0].tobytes())      # Write red channel.
        f.write(im[:, :, 1].tobytes())      # Write green channel.
        f.write(im[:, :, 2].tobytes())      # Write blue channel.
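As a sanity check (my addition, not part of the original answer), a record written this way can be read back with plain NumPy; the sizes below assume the 1750x1750 RGB images from the question, and a mismatch between this record length and what the reader expects is exactly the kind of thing that produces a "Read less bytes than requested" error.
import numpy as np

HEIGHT = WIDTH = 1750                     # assumed image size from the question
RECORD_BYTES = 1 + 3 * HEIGHT * WIDTH     # 1-byte label + three channel planes

def read_record(f):
    raw = np.frombuffer(f.read(RECORD_BYTES), dtype=np.uint8)
    label = int(raw[0])
    image = raw[1:].reshape(3, HEIGHT, WIDTH)   # channel-major, like CIFAR-10
    return label, image

with open("info.bin", "rb") as f:               # placeholder filename
    label, image = read_record(f)
    print(label, image.shape)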

OpenCV read image from csv file

I have an image in a CSV file and I want to load it in my program. I found that I can load an image from CSV like this:
CvMLData mlData;
mlData.read_csv(argv[1]);
const CvMat* tmp = mlData.get_values();
cv::Mat img(tmp, true),img1;
img.convertTo(img, CV_8UC3);
cv::namedWindow("img");
cv::imshow("img", img);
I have an RGB picture in that file but I get a grey picture... Can somebody explain to me how to load a color image, or how I can modify this code to get a color image?
Thanks!
Updated
Ok, I don't know how to read your file into OpenCV for the moment, but I can offer you a work-around to get you started. The following will create a header for a PNM format file to match your CSV file and then append your data onto the end and you should end up with a file that you can load.
printf "P3\n284 177\n255\n" > a.pnm # Create PNM header
tr -d ',][' < izlaz.csv >> a.pnm # Append CSV data, after removing commas and []
If I do the above, I can see your bench, tree and river.
If you cannot read that PNM file directly into OpenCV, you can make it into a JPEG with ImageMagick like this:
convert a.pnm a.jpg
I also had a look at the University of Wisconsin ML data archive, that is read with those OpenCV functions that you are using, and the format of their data is different from yours... theirs is like this:
1000025,5,1,1,1,2,1,3,1,1,2
1002945,5,4,4,5,7,10,3,2,1,2
1015425,3,1,1,1,2,2,3,1,1,2
1016277,6,8,8,1,3,4,3,7,1,2
yours looks like this:
[201, 191, 157, 201 ... ]
So maybe this tr command is enough to convert your data:
tr -d '][' < izlaz.csv > TryMe.csv
Original Answer
If you run the following on your CSV file, it translates commas into newlines and then counts the lines:
tr "," "\n" < izlaz.csv | wc -l
And that gives 150,804 lines, which means there are 150,804 commas in your file and therefore roughly 150,804 integers in it (+/- 1 or 2). If your greyscale image is 177 rows by 852 columns, you would need 150,804 RGB triplets (i.e. around 450,000 integers) to represent a colour image; as it is, you only have a single greyscale value for each pixel.
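A quick check of those counts (my addition, not part of the original answer):
# 150,804 values is one grey value per pixel of a 177x852 image,
# or exactly one RGB triplet per pixel of a 284x177 image:
print(177 * 852)        # 150804
print(284 * 177 * 3)    # 150804
print(3 * 177 * 852)    # 452412 values would be needed to show 177x852 in colour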
The fault is in the way you write the file, not the way you read it.
To see the color image I had to set the number of channels. So this code works for me:
CvMLData mlData;
mlData.read_csv(argv[1]);
const CvMat* tmp = mlData.get_values();
cv::Mat img(tmp, true),img1;
img.convertTo(img, CV_8UC3);
img= img.reshape(3); //set number of channels
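For what it's worth, the same reshape idea can be sketched in Python/NumPy; this assumes izlaz.csv has been cleaned of '[' and ']' as shown above (written here as TryMe.csv) and holds interleaved R,G,B values for a 284x177 image.
import numpy as np
import cv2

# read the cleaned, comma-separated values and cast to 8-bit
values = np.loadtxt("TryMe.csv", delimiter=",").ravel().astype(np.uint8)
img = values.reshape(177, 284, 3)                          # rows, cols, channels (RGB)
cv2.imshow("img", cv2.cvtColor(img, cv2.COLOR_RGB2BGR))    # OpenCV windows expect BGR
cv2.waitKey(0)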

Put some resized images into directory and then specify it to train the images

I have five RGB JPEG images. I need to put all these images (converted to grayscale and resized to 160x160) into a directory located in my working folder.
1) I read all five RGB images:
img1 = imread('image1.jpg');
img2 = imread('image2.jpg');
img3 = imread('image3.jpg');
img4 = imread('image4.jpg');
img5 = imread('image5.jpg');
2) I convert them to grayscale:
img1_gray = rgb2gray(img1);
img2_gray = rgb2gray(img2);
img3_gray = rgb2gray(img3);
img4_gray = rgb2gray(img4);
img5_gray = rgb2gray(img5);
3) I resize all the images to 160x160:
img1_gray_resized=imresize(img1_gray, [160 160]);
img2_gray_resized=imresize(img2_gray, [160 160]);
img3_gray_resized=imresize(img3_gray, [160 160]);
img4_gray_resized=imresize(img4_gray, [160 160]);
img5_gray_resized=imresize(img5_gray, [160 160]);
4) I have a directory named 'My_directory', and I need to put all my resized images into it. I used the imwrite function as shown just below, but I get an error and I think my call is not correct at all, which is why I need your help here.
imwrite(img1_gray_resized, 'My_directory','jpg');
imwrite(img2_gray_resized, 'My_directory','jpg');
imwrite(img3_gray_resized, 'My_directory','jpg');
imwrite(img4_gray_resized, 'My_directory','jpg');
imwrite(img5_gray_resized, 'My_directory','jpg');
5) In MATLAB, I now need to specify my directory for training on all the images. I used the code below:
Train_images = 'My_directory';
It is not correct. How can I correctly specify my directory so that I can train on all the images in it?
Any help will be very much appreciated.
Have another look at the syntax for imwrite. The second argument needs to specify the file name, so you would need to specify the sub-directory and the file name together:
Train_images = 'My_directory';
mkdir(Train_images);
imwrite(img1_gray_resized, fullfile(Train_images,'image1_gray_resized.jpg'));
% and similarly for the other 4 images
Also note that imwrite infers the format from the file extension.
Side note: If you care about image content, don't use jpg. Use png or bmp or something lossless.
