.nd2 to .tif using Fiji Bio-Formats Windowless Importer - ImageJ

Thanks for stopping by.
I am trying to process ~50 images from .nd2 to .TIF, but the exported images are not what I expect and I am not sure what is wrong. My .nd2 files have two channels and I would like the final .TIF to be an image of both channels. However, the .TIF output of my code contains just one channel.
setBatchMode(true); // batch mode prevents image windows from opening while the macro runs
for (i = 0; i < list.length; i++) {
    showProgress(i + 1, list.length);
    filename = list[i];
    run("Bio-Formats Windowless Importer", "open=[" + dir1 + filename + "] autoscale color_mode=Composite rois_import=[ROI manager] view=DataBrowser stack_order=XYCZT");
    selectWindow(filename);
    run("Stack to RGB", "slices"); // flatten the composite channels of each slice into a single RGB image
...after this part of the code I split the z-stack, find the middle slice, and save that slice as a .TIFF.
Please let me know if anything isn't clear. Thanks again for reading.

Related

Texture transformation

I am working on eigen-transformation texture features to detect objects in an image. This work was published in ACCV 2006, page 71; the full PDF is available as chapter 3 of https://www.diva-portal.org/smash/get/diva2:275069/FULLTEXT01.pdf. I am not able to follow the method after getting the texture descriptors.
I am working on the attached image. The image size is 954×1440.
I took image patches of 32×32 and for every patch calculated the eigenvalues to get the texture descriptor. What to do with these texture descriptors after that is what I am not able to follow.
Any help to unblock me would be really appreciated. The code for calculating the descriptors looks like below:
import numpy as np

# gray: 2-D numpy array holding the grayscale image
w = 32  # patch size (must be defined before it is used below)
descriptors = np.zeros((gray.shape[0]//w, gray.shape[1]//w))
for i in range(gray.shape[0]//w):
    for j in range(gray.shape[1]//w):
        patch = gray[i*w:(i+1)*w, j*w:(j+1)*w]
        # eigenvalues of the patch, sorted in descending order
        # (they may be complex for a non-symmetric patch, hence the abs below)
        sorted_eigen = -np.sort(-np.linalg.eigvals(patch))
        # l and k index into this patch's 32 eigenvalues; the original
        # i*w offsets indexed past the end of the array for i >= 1
        l = 13
        k = w
        theta_svd = (1/(k-l+1)) * np.sum(np.abs(sorted_eigen[l:k]))
        descriptors[i, j] = theta_svd
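If it helps while debugging, a quick way to sanity-check the descriptor map once it is computed is to view it as a small image (a sketch assuming matplotlib is installed; each pixel below is the theta_svd of one 32×32 patch):
import matplotlib.pyplot as plt

plt.imshow(descriptors, cmap='gray')
plt.colorbar()
plt.title('theta_svd per 32x32 patch')
plt.show()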

PySimpleGUI/tk: drawing OpenCV image into sg.Graph efficiently

An image captured from a camera is stored into a numpy ndarray object with a shape of (1224,1024,3).
This format is very convenient for applying OpenCV methods to it.
I was looking for a way to draw it into (or onto) an sg.Graph element of PySimpleGUI.
The method I found worked, but was very inefficient:
from io import BytesIO
from PIL import Image

def draw_img(self, img):
    # turn the image into a PIL image object:
    pil_im = Image.fromarray(img)
    # use PIL to convert the image into an in-memory PNG file
    with BytesIO() as output:
        pil_im.save(output, format="PNG")
        png = output.getvalue()
    # remove any previous elements from the canvas of our sg.Graph:
    self.image_element.erase()
    # add an image into the sg.Graph element
    self.image_element.draw_image(data=png, location=(0, self.img_sz[1]))
The reason it is inefficient is clearly that we are encoding the raw image into PNG.
However, I could not find any better way to do this! In my case, I had to show every frame coming from the camera, and it was way too slow.
So what is a better way to do it?
I could not find any 'by the book' solution, but I have found a working way.
The idea came from a discussion on this page. With it, I had to copy the original code of draw_image in PySimpleGUI and modify it.
import tkinter as tk

def draw_img(self, img):
    # turn our ndarray into a PPM image held in a bytes object, by prepending a simple header:
    # this header is good for RGB; for monochrome, use P5 (see the PPM docs)
    ppm = ('P6 %d %d 255 ' % (self.img_sz[0], self.img_sz[1])).encode('ascii') + img.tobytes()
    # turn that bytes object into a PhotoImage object:
    image = tk.PhotoImage(width=self.img_sz[0], height=self.img_sz[1], data=ppm, format='PPM')
    if self.img_id is None:
        # the first time, create and attach an image object to the canvas of our sg.Graph:
        self.img_id = self.image_element.Widget.create_image((0, 0), image=image, anchor=tk.NW)
        # we must mimic the way sg.Graph keeps track of its added objects:
        self.image_element.Images[self.img_id] = image
    else:
        # afterwards, we reuse the image object, only changing its content
        self.image_element.Widget.itemconfig(self.img_id, image=image)
        # we must update this reference too, so tk does not garbage-collect the image:
        self.image_element.Images[self.img_id] = image
Using this method, I was able to achieve great performance, so I decided to share this solution with the community. Hoping this will help somebody!
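For anyone trying this outside of a class, here is a minimal, self-contained sketch of the setup that draw_img assumes (the sg.Graph element, the cached img_id, and an event loop). The random frame is only a stand-in for a camera capture, and remember that OpenCV delivers BGR, so a real frame may need cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) before building the PPM:
import numpy as np
import PySimpleGUI as sg
import tkinter as tk

W, H = 640, 480  # frame size for this demo

graph = sg.Graph(canvas_size=(W, H), graph_bottom_left=(0, 0),
                 graph_top_right=(W, H), key='-GRAPH-')
window = sg.Window('Live frames', [[graph]], finalize=True)

img_id = None
while True:
    event, values = window.read(timeout=10)
    if event == sg.WIN_CLOSED:
        break
    # stand-in for a camera frame: H rows x W columns of RGB bytes
    frame = np.random.randint(0, 256, (H, W, 3), dtype=np.uint8)
    ppm = ('P6 %d %d 255 ' % (W, H)).encode('ascii') + frame.tobytes()
    image = tk.PhotoImage(width=W, height=H, data=ppm, format='PPM')
    if img_id is None:
        img_id = graph.Widget.create_image((0, 0), image=image, anchor=tk.NW)
    else:
        graph.Widget.itemconfig(img_id, image=image)
    graph.Images[img_id] = image  # keep a reference so tk does not garbage-collect it
window.close()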
Thank you for helping me. I modified your approach.
My code:
from PIL import Image
import PySimpleGUI as sg

def image_source(path, width=128, height=128):
    img = Image.open(path).convert('RGB')  # PIL image
    image = img.resize((width, height))
    ppm = ('P6 %d %d 255 ' % (width, height)).encode('ascii') + image.tobytes()
    return ppm

image1 = image_source(r'Z:\xxxx6.png')
layout = [.... sg.Image(data=image1) ...

Python parallelization for code to combine multiple images

I am new to Python and am trying to parallelize a program that I somehow pieced together from the internet. The program reads all image files (usually multiple series of images such as abc001,abc002...abc015 and xyz001,xyz002....xyz015) in a specific folder and then combines images in a specified range. Most times, the number of files exceeds 10000, and my latest case requires me to combine 24000 images. Could someone help me with:
Taking 2 sets of images from different directories. Currently I have to move these images into 1 directory and then work in said directory.
Reading only specified files. Currently my program reads all files, saves the names in an array (I think it's an array; it could also be a dictionary) and then uses only the images required to combine. If I specify a range of files, it still checks against all files in the directory and takes a lot of time.
Parallel Processing - I work with usually 10k files or sometimes more. These are images saved from the fluid simulations that I run at specific times. Currently, I save about 2k files at a time in separate folders and run the program to combine these 2000 files at one time. And then I copy all the output files to a separate folder to keep them together. It would be great if I could use all 16 cores on the processor to combine all files in 1 go.
Image series 1 is like so.
Consider it to be a series of photos of a cat walking towards the camera. Each frame is suffixed with 001,002,...,n.
Image series 2 is like so.
Consider it to be a series of photos of the cat's expression changing with each frame. Each frame is suffixed with 001,002,...,n.
The code currently combines each frame from set1 and set2 to provide output.png as shown in the link here.
import os
from PIL import Image

keywords = input('Enter initial characters of image series 1 [Ex:Scalar_ , VoF_Scene_]:\n')
keywords2 = input('Enter initial characters of image series 2 [Ex:Scalar_ , VoF_Scene_]:\n')
directory = input('Enter correct folder name where images are present :\n')  # FOLDER WHERE IMAGES ARE LOCATED

result1 = {}
result2 = {}
name_count1 = 0
name_count2 = 0
for filename in os.listdir(directory):
    if keywords in filename:
        name_count1 += 1
        result1[name_count1] = os.path.join(directory, filename)
    if keywords2 in filename:
        name_count2 += 1
        result2[name_count2] = os.path.join(directory, filename)

num1 = int(input('Enter initial number of series:\n'))
num2 = int(input('Enter final number of series:\n'))

if name_count1 == (num2 - num1 + 1):
    a1 = 1
    a2 = name_count1
elif name_count2 == (num2 - num1 + 1):
    a1 = 1
    a2 = name_count2
else:
    a1 = num1
    a2 = num2 + 1

for x in range(a1, a2):
    y = format(x, '05')  # '05' is the number of digits in the file-name series, Ex: [Scalar_scene_1_00345.png --> 5 digits], [Temperature_section_2_951.jpg --> 3 digits]. Change accordingly
    for comparison_name1 in result1:
        for comparison_name2 in result2:
            test1 = result1[comparison_name1]
            test2 = result2[comparison_name2]
            if y in test1 and y in test2:
                test = [test1, test2]
                images = [Image.open(t) for t in test]
                widths, heights = zip(*(i.size for i in images))
                total_width = sum(widths)
                max_height = max(heights)
                new_im = Image.new('RGB', (total_width, max_height))
                x_offset = 0
                for im in images:
                    new_im.paste(im, (x_offset, 0))
                    x_offset += im.size[0]
                output_name = 'output' + y + '.png'
                new_im.save(os.path.join(directory, output_name))
I did a Python version as well; it's not quite as fast, but it is maybe closer to your heart :-)
#!/usr/bin/env python3
import cv2
import numpy as np
from multiprocessing import Pool

def doOne(params):
    """Append the two input images side-by-side to output the third."""
    imA = cv2.imread(params[0], cv2.IMREAD_UNCHANGED)
    imB = cv2.imread(params[1], cv2.IMREAD_UNCHANGED)
    res = np.hstack((imA, imB))
    cv2.imwrite(params[2], res)

if __name__ == '__main__':
    # Build the list of jobs - each entry is a tuple with 2 input filenames and an output filename
    jobList = []
    for i in range(1000):
        # Horizontally append a-XXXXX.png to b-XXXXX.png to make c-XXXXX.png
        jobList.append((f'a-{i:05d}.png', f'b-{i:05d}.png', f'c-{i:05d}.png'))

    # Make a pool of processes - 1 per CPU core
    with Pool() as pool:
        # Map the list of jobs to the pool of processes
        pool.map(doOne, jobList)
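As a sketch of how the same jobList idea could also cover your first two points (two source directories, and reading only a specified numeric range) - the dirA/dirB paths and the 0-1999 range below are hypothetical placeholders, and the prefixes reuse your Scalar_/VoF_Scene_ examples:
import os

dirA = '/sims/scalar'      # hypothetical folder holding image series 1
dirB = '/sims/vof'         # hypothetical folder holding image series 2
outDir = '/sims/combined'

num1, num2 = 0, 1999       # numeric range to combine

# Build the filenames directly from the range instead of listing the
# directories, so files outside the range are never even looked at
jobList = [(os.path.join(dirA, f'Scalar_{i:05d}.png'),
            os.path.join(dirB, f'VoF_Scene_{i:05d}.png'),
            os.path.join(outDir, f'output{i:05d}.png'))
           for i in range(num1, num2 + 1)]
Feeding that list to the same Pool as above combines exactly the requested range, however many other files the folders contain.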
You can do this a little quicker with libvips. To join two images left-right, enter:
vips join left.png right.png result.png horizontal
To test, I made 200 pairs of 1200x800 PNGs like this:
for i in {1..200}; do cp x.png left$i.png; cp x.png right$i.png; done
Then tried a benchmark:
time parallel vips join left{}.png right{}.png result{}.png horizontal ::: {1..200}
real 0m42.662s
user 2m35.983s
sys 0m6.446s
With imagemagick on the same laptop I see:
time parallel convert left{}.png right{}.png +append result{}.png ::: {1..200}
real 0m55.088s
user 3m24.556s
sys 0m6.400s
You can do that much faster without Python, using multi-processing with ImageMagick or libvips.
The first part is all setup:
Make 20 images, called a-000.png ... a-019.png that go from red to blue:
convert -size 64x64 xc:red xc:blue -morph 18 a-%03d.png
Make 20 images, called b-000.png ... b-019.png that go from yellow to magenta:
convert -size 64x64 xc:yellow xc:magenta -morph 18 b-%03d.png
Now append them side-by-side into c-000.png ... c-019.png
for ((f=0;f<20;f++))
do
z=$(printf "%03d" $f)
convert a-${z}.png b-${z}.png +append c-${z}.png
done
Those images look like this:
If that looks good, you can do them all in parallel with GNU Parallel:
parallel convert a-{}.png b-{}.png +append c-{}.png ::: {000..019}
Benchmark
I did a quick benchmark and made 20,000 images a-00000.png...a-19999.png and another 20,000 images b-00000.png...b-19999.png, each image 1200x800 pixels. Then I ran the following command to append each pair horizontally and write 20,000 output images c-00000.png...c-19999.png:
seq -f "%05g" 0 19999 | parallel --eta convert a-{}.png b-{}.png +append c-{}.png
and that takes 16 minutes on my MacBook Pro with all 12 CPU cores pegged at 100% throughout. Note that you can:
add spacers between the images,
write annotation onto the images,
add borders,
resize
if you wish and do lots of other processing - this is just a simple example.
Note also that you can get even quicker times - in the region of 10-12 minutes - if you accept JPEG instead of PNG as the output format.

convert image 75 dpi to 300 dpi using Imagemagick PHP

I'm attempting to increase a very low resolution jp2 image to a higher DPI so that the image can be seen without any inconvenience to our eyes.
I have been successful in reading a jpeg2000 encoded string and displaying it as a PNG file. (Below is the code)
$imagedata = "AAAADGpQICANCocKAAAAFGZ0eXBqcDIgAAAAAGpwMiAAAAAtanAyaAAAABZpaGRyAAAAyAAAAKAAAwcHAAAAAAAPY29scgEAAAAAABAAAAGXanAyY/9P/1EALwAAAAAAoAAAAMgAAAAAAAAAAAAAAKAAAADIAAAAAAAAAAAAAwcBAQcBAQcBAf9SAAwAAAABAQUEBAAA/1wAI0JvGG7qbupuvGcAZwBm4l9MX0xfZEgDSANIRU/ST9JPYf9kACIAAUNyZWF0ZWQgYnk6IEpKMjAwMCB2ZXJzaW9uIDQuMf+QAAoAAAAAAQMAAf9SAAwAAAABAQUEBAAA/5PPoKgT/dHUscn3uMJWDWKb153z8hPvSInB8QsdvHSg4pzoLevV6cHhwCOWrDWed1zB8RKHyC4PEhigx/MYuIx4wci8q/CEo2kiHBrV8DhszG7ymZ/UH7atm39cdbppgIDD4VYfCrB00E+GI+Qf3v1IHzVdC6k/pMRXolANASf+TQYCTKERfZoHB65rCU23EcMzjiQo+2MAmLli7aos4tyAgMOrw6tBVpk5rPA9rz1HB6Wn+siLUizMFl3TKpn7s1pJGcCba3pGnanMUNO8OP+EwaMdppACpwb6vbqSpeUbgICAgICAgID/2Q==";
$image=base64_decode($imagedata);
// Create Imagick object
$im = new Imagick();
// Convert image into Imagick
$im->readImageBlob($image);
//Set the output format
$im->setImageFormat("png");
header('Content-type: image/png');
echo $im;
I read that it is possible to increase the DPI using ImageMagick; see http://www.imagemagick.org/discourse-server/viewtopic.php?t=18241.
How do I achieve this in my PHP script (NOT through the command line)? Any help and guidance would be very much appreciated.
If you look at the UK Government website for the Passport Office, it says that passport photos need to be at least 600px wide by 750px tall.
Let's start with a photo of adequate quality (if not content) for Mr Bean at 600x750:
If we now resize him down to the same size as your image (160x200), then back up again, you will see the quality has suffered: the image has to be represented with only 160x200 pixels, and you can't invent all those pixels you lost - they are gone for good. Look at his teeth and the highlights in his eyes:
convert bean.jpg -resize 160x200 -resize 600x750 result.jpg
So, all you can do in Imagick is:
Imagick::resizeImage ( int $columns , int $rows , int $filter , float $blur [, bool $bestfit = FALSE [, bool $legacy = FALSE ]] )
to go back up to 600x750 and experiment with setting the filter to Catrom or Lanczos. But you can't invent stuff that isn't there...
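For what it's worth, the same downscale/upscale experiment is easy to reproduce in Python with Pillow (a stand-in for the Imagick call, with bean.jpg as the hypothetical 600x750 source and Pillow's LANCZOS filter in place of Catrom/Lanczos):
from PIL import Image

im = Image.open('bean.jpg')                    # hypothetical 600x750 original
small = im.resize((160, 200), Image.LANCZOS)   # down to the question's size: detail is discarded here
big = small.resize((600, 750), Image.LANCZOS)  # back up: the lost pixels cannot be re-invented
big.save('result.jpg')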

OpenCV read image from csv file

I have an image in a CSV file and I want to load it into my program. I found that I can load an image from CSV like this:
CvMLData mlData;
mlData.read_csv(argv[1]);
const CvMat* tmp = mlData.get_values();
cv::Mat img(tmp, true),img1;
img.convertTo(img, CV_8UC3);
cv::namedWindow("img");
cv::imshow("img", img);
I have an RGB picture in that file but I get a grey picture... Can somebody explain how to load a color image, or how I can modify this code to get a color image?
Thanks!
Updated
Ok, I don't know how to read your file into OpenCV for the moment, but I can offer you a work-around to get you started. The following will create a header for a PNM format file to match your CSV file and then append your data onto the end and you should end up with a file that you can load.
printf "P3\n284 177\n255\n" > a.pnm # Create PNM header
tr -d ',][' < izlaz.csv >> a.pnm # Append CSV data, after removing commas and []
If I do the above, I can see your bench, tree and river.
If you cannot read that PNM file directly into OpenCV, you can make it into a JPEG with ImageMagick like this:
convert a.pnm a.jpg
I also had a look at the University of Wisconsin ML data archive, which is read with those OpenCV functions you are using, and the format of their data is different from yours... theirs is like this:
1000025,5,1,1,1,2,1,3,1,1,2
1002945,5,4,4,5,7,10,3,2,1,2
1015425,3,1,1,1,2,2,3,1,1,2
1016277,6,8,8,1,3,4,3,7,1,2
yours looks like this:
[201, 191, 157, 201 ... ]
So maybe this tr command is enough to convert your data:
tr -d '][' < izlaz.csv > TryMe.csv
Original Answer
If you run the following on your CSV file, it translates commas into newlines and then counts the lines:
tr "," "\n" < izlaz.csv | wc -l
And that gives 150,804 lines, which means 150,804 commas in your file and therefore around 150,804 integers in your file (+/- 1 or 2). If your greyscale image is 177 rows by 852 columns, you are going to need 150,804 RGB triplets (i.e. around 452,000 integers) to represent a colour image; as it is, you only have a single greyscale value for each pixel.
The fault is in the way you write the file, not the way you read it.
To see the color image I had to set the number of channels. So this code works for me:
CvMLData mlData;
mlData.read_csv(argv[1]);
const CvMat* tmp = mlData.get_values();
cv::Mat img(tmp, true);
img.convertTo(img, CV_8UC3);
img = img.reshape(3); // set the number of channels to 3, so each 852-value row becomes 284 BGR pixels
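A rough Python equivalent of the same idea, assuming (as deduced in the other answer) that izlaz.csv holds 177x852 comma-separated values wrapped in [ ] brackets, which are really interleaved colour triplets for a 177x284 image:
import numpy as np
import cv2

# read the CSV, strip the [ ] brackets, and parse the comma-separated values
raw = open('izlaz.csv').read().replace('[', '').replace(']', '')
vals = np.array([int(v) for v in raw.split(',')], dtype=np.uint8)

# reshape the flat values into rows x columns x channels
img = vals.reshape(177, 284, 3)
cv2.imwrite('restored.png', img)  # OpenCV assumes the triplets are B,G,R; swap channels if they are R,G,B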
