I am using Maxima and I have a lot of resulting plots that I want to save to disk for other uses (making GIFs, etc.).
Is there any code that can autosave the plots instead of having to save them manually one by one?
Thank you in advance.
Well, one approach is to specify a file name in the arguments of plot2d. Then the plot is output directly to the file and it doesn't show up in the GUI. E.g.,
plot2d (sin(x), [x, 0, 10], [png_file, "mysinplot.png"]);
plot2d recognizes png_file, pdf_file, ps_file and svg_file. In each case, ? png_file, etc., will show some info about that option.
Note that there isn't any file output flag for GIF output. The closest thing is PNG which is similar to GIF.
I think draw also recognizes different file formats but I don't know about that without searching the documentation.
If you are generating a lot of plots, it might be convenient to automatically generate file names via sconcat, e.g. sconcat("myplot", i, ".png") produces "myplot10.png" when i is equal to 10.
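For example, a quick untested sketch of that pattern (the expression and the plot range are just placeholders):

for i : 1 thru 20 do
    plot2d (sin(i*x), [x, 0, 10],
            [png_file, sconcat("myplot", i, ".png")]);

Each pass through the loop writes one PNG file without opening a plot window.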
I am using jsPDF to generate a PDF from multiple images. The issue is that I get the same image in all the generated PDF files. Any ideas, please?
I had a similar problem when using multiple canvases to generate a multi-page PDF document. I was originally using the default format (PNG), so after several hours going through my code I decided to change the format to JPEG, and what do you know, the problem went away. Here is the call:
doc.addImage(canvas.toDataURL("image/jpeg"), "JPEG", 0, 0, canvas.width, canvas.height);
Have a look at the parameter list of addImage():
jsPDFAPI.addImage = function(imageData, format, x, y, w, h, alias, compression, rotation)
If you add multiple different images but somehow set alias to the same for all, jsPDF will reuse the first of those images. This is intended behaviour and reduces the output size.
I recommend always setting alias to something unique for unique images. If alias is not set, jsPDF will calculate a hash, and for large images this can be quite expensive.
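For example, a rough sketch of the idea; the canvases array, the page handling and the file name are made up for illustration:

var doc = new jsPDF();
canvases.forEach(function (canvas, i) {
    if (i > 0) doc.addPage();
    // a unique alias per image keeps jsPDF from reusing the first one
    doc.addImage(canvas.toDataURL("image/jpeg"), "JPEG",
                 0, 0, canvas.width, canvas.height,
                 "canvas-image-" + i);
});
doc.save("multi-image.pdf");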
[Edit, as I can't comment directly on marwen web's answer below:
addImage() has no option split, so I do not know what you mean. Perhaps you can give an example in case other users have the same problem?]
Thank you for your answer. Actually the problem was caused by an option added in the call of the function; it was caused by the option "split". I use the PNG format without any problem.
I know this has been posted elsewhere and that this is by no means a difficult problem, but I'm very new to writing macros in FIJI and am having a hard time even understanding the solutions described in various online resources.
I have a series of images all in the same folder and want to apply the same operations to them all and save the resultant excel files and images in an output folder. Specifically, I'd like to open, smooth the image, do a Max intensity Z projection, then threshold the images to the same relative value.
This thresholding is the one step causing a problem. By relative value I mean that I would like to set the threshold so that the same percentage of the intensity histogram is included. Currently, in FIJI, if you go to Image ▶ Adjust ▶ Threshold you can move the sliders so that a certain percentage of the image is thresholded, and it will display that value for you in the open window. In my case 98% is what I am trying to achieve, i.e. thresholding all but the top 2% of the data.
Once the threshold is applied to the MIP, I convert it to binary, do particle analysis and save the results (summary table, results, image overlay).
My approach has been to try to automate all the steps and do batch processing, but I have been having a hard time adapting what I have written to work based on instructions found online. Instead I've just been opening every image in the directory one by one, applying the macro that I wrote, and then saving the results manually. Obviously this is a tedious approach, so any help would be much appreciated!
What I have been using for my simple macro:
run("Smooth", "stack");
run("Z Project...", "projection=[Max Intensity]");
setAutoThreshold("Default");
//run("Threshold...");
run("Convert to Mask");
run("Make Binary");
run("Analyze Particles...", " show=[Overlay Masks] display exclude clear include summarize in_situ");
You can use the Process ▶ Batch ▶ Macro... command for this.
For further details, see the Batch Processing page of the ImageJ wiki.
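If you prefer a single self-contained macro, below is a rough, untested sketch of the whole pipeline. It assumes 16-bit input stacks, that "top 2%" means keeping only the brightest 2% of the projection's pixels, and that saving the Results table plus the mask image is enough; the folder prompts, the histogram-based cutoff and the output names are all just illustrative choices:

inputDir = getDirectory("Choose the input folder");
outputDir = getDirectory("Choose an output folder");
list = getFileList(inputDir);
setBatchMode(true);
for (i = 0; i < list.length; i++) {
    open(inputDir + list[i]);
    title = getTitle();
    run("Smooth", "stack");
    run("Z Project...", "projection=[Max Intensity]");
    // find the grey value below which 98% of the projection's pixels fall
    getHistogram(values, counts, 65536);
    total = getWidth() * getHeight();
    cum = 0;
    cutoff = -1;
    for (b = 0; b < counts.length; b++) {
        cum = cum + counts[b];
        if (cum >= 0.98 * total && cutoff < 0) cutoff = values[b];
    }
    setThreshold(cutoff, 65535);
    setOption("BlackBackground", true);
    run("Convert to Mask");
    run("Analyze Particles...", "show=[Overlay Masks] display exclude clear include summarize in_situ");
    saveAs("Results", outputDir + title + "_results.csv");
    saveAs("Tiff", outputDir + title + "_MIP_mask.tif");
    close("*");
}
setBatchMode(false);

The Summary table is a separate window, so saving it would need something like an extra selectWindow("Summary") followed by a saveAs call.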
So, I have 20 positive samples and 500 negative samples. I created the .vec file using the createsamples utility. Now, when I try to train the classifier using the traincascade.exe utility, I run into the following error:
I have looked into many solutions given to people who have faced similar issues, but none of them worked.
Things I tried:
1. Increasing the negative sample size
2. Checking the path of the negative (or background) images stored in the Negative.txt file
3. Varying different parameters
Here is some information regarding the path. My working directory has the following files:
1. Traincascade.exe
2. Positive image folder
3. NegativeImageFolder
4. vec file
5. Negative.txt (file that has the paths to the images in the negative image folder)
My Negative.txt file has the absolute file paths for the images in the negative image folder. I also tried changing the file paths to the following format:
NegativeImageFolder\Image1.pgm
but that didn't work! I tried both forward and back slashes too!
I have run out of ways to change the file path or make any modification to make this work!
First of all: are NumStages 1 and maxDepth 1 intentional?
Looking at OpenCV's source code (cascadeclassifier.cpp, imagestorage.cpp), the error is thrown when, in the function
bool CvCascadeClassifier::updateTrainingSet( double& acceptanceRatio)
the required number of negative samples (negCount = 500) cannot be filled.
Before that, everything was fine with the positive samples (the line about the POS count printed on the screen is proof of this).
Digging deeper into the source code, negCount cannot be filled when imgReader.getNeg( img ) returns false. This means it cannot provide any image, which in turn happens when the list of source negatives is empty.
So you have to concentrate your efforts on providing the algorithm with a correct list of negative images.
In practice, check two things: that Negative.txt is actually being read and that all the paths in it are valid, and that every image in the list can actually be opened.
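A quick way to check the second point is a small script like the one below; this is purely illustrative and assumes the list is called Negative.txt and sits in the current working directory:

import cv2

with open("Negative.txt") as f:
    paths = [line.strip() for line in f if line.strip()]

# any path that OpenCV cannot decode comes back as None
bad = [p for p in paths if cv2.imread(p, cv2.IMREAD_GRAYSCALE) is None]
print(len(paths) - len(bad), "of", len(paths), "negatives readable")
for p in bad:
    print("cannot read:", p)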
Is the file name “Negative.txt” or “Negatives.txt”?
Anyway, with so few positive and negative samples you won't train anything that actually works; it is only useful for understanding how the training process works.
Well, I was able to resolve the issue and train the classifier successfully. However, I am not 100% sure how the change I made helped.
This is what I did:
I was generating the Negative.txt file using Excel. I would enter the file path of one image and increment the image filename (since my images were named image1, image2, image3, ...). So the format, as mentioned earlier, would be:
C:\OpenCV-3.0.0\opencv\build\x64\vc12\bin\Negative\Image1.pgm
And finally I would save the file as a Unicode text document. However, saving it as a Unicode text document gave me the error stated in the question. I saved it as a Text (tab delimited) file instead and it worked.
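For anyone hitting the same issue, another option is to skip Excel and write the list as plain ASCII directly, which avoids the Unicode/BOM problem entirely. A hypothetical sketch, assuming 500 images named Image1.pgm through Image500.pgm in NegativeImageFolder:

# writes one relative path per line as plain text
with open("Negative.txt", "w") as f:
    for i in range(1, 501):
        f.write("NegativeImageFolder\\Image{}.pgm\n".format(i))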
Apologies for tagging this just ImageJ - it's a problem regarding MicroManager, a microscopy plugin for it, and I thought this would be the best fit.
I'd recently taken images for an important experiment using MicroManager (a recent version, though I cannot recall the exact number). The IT services at my institution have recently been having some networking problems, and my saved preferences for the software had been erased. I'd got halfway through my experiment when I realised that I'd saved my images as separate image files (three greyscale TIFFs plus metadata text files) instead of OME-TIFF image stacks.
All of my ImageJ macros for image processing rely on having a multiple channel image stack, so this is a bit of a problem. Is there any easy way in MicroManager (or ImageJ) to bulk convert these single channel greyscale images into the OME-TIFF image stack after the images have already been taken?
Cheers.
You can start with a macro like this one:
// Convert your images to a stack
run("Images to Stack", "name=Stack title=[] use");
// The stack will default the images to time points. Convert to channels
run("Stack to Hyperstack...", "order=xyczt(default) channels=3 slices=1 frames=1 display=Color");
// Export as OME-TIFF
run("Bio-Formats Exporter");
This is designed to reconstruct one dataset at a time (open 3 images, run the macro and export the OME-TIFF).
If you don't want any dialogs to show you can pass an output directory to the Bio-Formats exporter:
run("Bio-Formats Exporter", "save=/path/to/image.ome.tif export compression=Uncompressed");
For the output file name you can get the original image name in the macro with getTitle().
There is also a template example on iterating over all the files in a directory, if you want to completely automate the macro. However this may take some tweaking since you want to operate on your images 3 at a time.
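For example, a rough, untested sketch of that fully automated version, assuming the three channel files of each dataset sort next to each other in the folder (e.g. img001_ch0.tif, img001_ch1.tif, img001_ch2.tif) and that nothing else is in the directory; the prompts and output naming are just placeholders:

inputDir = getDirectory("Choose the folder of single-channel TIFFs");
outputDir = getDirectory("Choose an output folder");
list = getFileList(inputDir);
setBatchMode(true);
for (i = 0; i < list.length; i += 3) {
    open(inputDir + list[i]);
    open(inputDir + list[i + 1]);
    open(inputDir + list[i + 2]);
    name = File.nameWithoutExtension;   // name the output after the last file opened
    run("Images to Stack", "name=Stack title=[] use");
    run("Stack to Hyperstack...", "order=xyczt(default) channels=3 slices=1 frames=1 display=Color");
    run("Bio-Formats Exporter", "save=[" + outputDir + name + ".ome.tif] export compression=Uncompressed");
    close("*");
}
setBatchMode(false);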
Hope that helps!
I am doing image manipulation on PNG images and I have the following problem: after saving an image with the imwrite() function, the size of the image increases. For example, an image that was previously 847 KB becomes 1.20 MB after saving. Here is the code. I just read an image and then save it, but the size increases. I tried to set compression params but it doesn't help.
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;

Mat image = imread("5.png", -1);   // load the file unchanged (keep depth/alpha)
vector<int> compression_params;
compression_params.push_back(CV_IMWRITE_PNG_COMPRESSION);
compression_params.push_back(9);   // maximum deflate compression level
imwrite("output.png", image, compression_params);
What could be the problem? Any help, please.
Thanks.
PNG has several options that influence the compression: deflate compression level (0-9), deflate strategy (HUFFMAN/FILTERED), and the choice (or strategy for dynamically choosing) of the internal prediction error filter (AVERAGE, PAETH, ...).
It seems OpenCV only lets you change the first one, and it doesn't have a good default value for the second. So, it seems you must live with that.
Update: looking into the sources, it seems that a compression strategy setting has been added (after complaints), but it isn't documented. I wonder if that source is released. Try setting the option CV_IMWRITE_PNG_STRATEGY to Z_FILTERED and see what happens.
See the linked source code for more details about the params.
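For completeness, here is an untested sketch of that suggestion. It assumes your OpenCV build exposes the strategy flag; in 2.x it is spelled CV_IMWRITE_PNG_STRATEGY / CV_IMWRITE_PNG_STRATEGY_FILTERED, while later releases use cv::IMWRITE_PNG_STRATEGY / cv::IMWRITE_PNG_STRATEGY_FILTERED:

#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;

Mat image = imread("5.png", -1);
vector<int> params;
params.push_back(CV_IMWRITE_PNG_COMPRESSION);
params.push_back(9);                                // max deflate level
params.push_back(CV_IMWRITE_PNG_STRATEGY);
params.push_back(CV_IMWRITE_PNG_STRATEGY_FILTERED); // corresponds to zlib's Z_FILTERED
imwrite("output_filtered.png", image, params);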
@Karmar, it's been many years since your last edit.
I ran into a similar confusion in June 2021, and I found out something which might benefit others like us.
PNG files seem to have this thing called mode. Here, let's focus only on three modes: RGB, P and L.
To quickly check an image's mode, you can use Python:
from PIL import Image
print(Image.open("5.png").mode)
Basically, when using P or L you are storing 8 bits/pixel, while RGB uses 3*8 bits/pixel.
For a more detailed explanation, one can refer to this fine Stack Overflow post: What is the difference between images in 'P' and 'L' mode in PIL?
Now, when we use OpenCV to open a PNG file, what we get is an array of three channels, regardless of which mode the file was saved in. Three channels with data type uint8 means that when we imwrite this array to a file, no matter how hard you compress it, it will be hard to beat the original file if that file was saved in P or L mode.
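A quick way to see both points at once, reusing the question's "5.png" as a stand-in:

from PIL import Image
import cv2

print(Image.open("5.png").mode)   # e.g. 'P' or 'L' for palette/greyscale PNGs
img = cv2.imread("5.png")         # OpenCV decodes to 3-channel BGR regardless of mode
print(img.shape, img.dtype)       # (height, width, 3) uint8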
I guess @Karmar might have already had this question solved. For future readers, check the mode of your own 5.png.