I am trying to figure out why my GIMP plugin uses more and more RAM during its execution. I wrote a simple test plugin to check whether deleting images through pdb.gimp_image_delete works as intended:
image_id = pdb.gimp_image_new(500, 500, RGB)
while True:
    duplicate = pdb.gimp_image_duplicate(image_id)
    pdb.gimp_image_delete(image_id)
    image_id = duplicate
    print pdb.gimp_image_list()
The image list looks fine: in every iteration pdb.gimp_image_list shows that there is only one image, but RAM usage grows rapidly. It's close to 1 GB after 1 minute of execution! It looks like gimp_image_delete leaves the image in memory somehow, or something else is causing this. Any ideas on how to solve this? I thought it might be the fault of gimp_image_duplicate, but replacing duplicate = pdb.gimp_image_duplicate(image_id) with image_id = pdb.gimp_image_new(500, 500, RGB) gives the same effect. I also tried the gimp.delete function.
I'm using fswebcam to capture an image via a Node-RED exec node running on a Raspberry Pi.
The time it takes to capture the image is 3+ seconds.
fswebcam -r 1280x720 image.jpg
I tried the same thing using OpenCV, and the result is a little better, but similar.
from cv2 import *

cam = VideoCapture(1)
s, img = cam.read()
if s:
    imwrite("/home/pi/pythontest/tt.jpg", img)  # save image
cam.release()
I'm guessing that it takes some time for the USB camera to initialize and take a picture which increases the time drastically. Is there any way to keep the camera initialized?
Any other workarounds to ameliorate this issue?
There may be other methods, but one way to do this is to run the camera continuously during periods when you want faster responses. You will need to consider some things, though:
bandwidth used to capture images
wear on your SD card
accessing incomplete images midway through a capture.
I'll leave you to determine what USB bandwidth you need for the resolution you are using.
As regards the second - wear on your SD card - I would suggest you capture to /tmp and ensure that it is backed by a RAM filesystem, by becoming root and adding a line like this to your /etc/fstab:
tmpfs /tmp tmpfs defaults,noatime,nosuid 0 0
Then reboot. This way the data never goes near your SD card.
As regards the third - incomplete images still being captured - you can leverage the --exec option of fswebcam to get around this. Basically, you capture to one file and then, after the capture is complete, you use --exec to rename the file to /tmp/latest.jpg, and you use that file in your application.
fswebcam -r 640x480 --loop 1 --exec 'mv /tmp/inprogress.jpg /tmp/latest.jpg' /tmp/inprogress.jpg
This relies on the fact that, under Unix at least, renaming a file does not affect any process that already has that file open, and that renaming is atomic. So your application will always get either the entire new file or the entire old file, and never half a file still being written.
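On the consuming side, reading /tmp/latest.jpg then needs no special handling; here is a minimal sketch (assuming a Python consumer with OpenCV, which is my assumption, not part of fswebcam):

import cv2

def read_latest(path="/tmp/latest.jpg"):
    # Thanks to the atomic rename, this returns either the old complete
    # frame or the new complete frame, never a partially written file.
    return cv2.imread(path)

frame = read_latest()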
My camera produces images around 160kB, so I tested the file size like this in a tight loop, reading the file as fast as possible and only notifying me if it is far less than the normal size, i.e. truncated:
while : ; do l=$(wc -c < latest.jpg); [[ $l -lt 140000 ]] && echo $l; done
Try profiling your code (e.g. with cProfile) to ensure that the issue is not Python interpreter start-up time or imwrite.
If the issue is camera initialization, then I suppose the only option is to write a daemon that keeps the camera online and gives you an image on request.
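A minimal sketch of such a daemon's core, assuming OpenCV (the request transport, e.g. a socket or MQTT from Node-RED, is left out; snapshot() is a name I made up):

import cv2

# Open the camera once, at daemon start-up, and keep it open so that
# each picture only costs a read(), not device initialization.
cam = cv2.VideoCapture(1)

def snapshot(path):
    # Grab a fresh frame on demand and write it to disk.
    ok, img = cam.read()
    if ok:
        cv2.imwrite(path, img)
    return ok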
I'm running a machine learning pipeline for segmentation of very large 3D images. I would like to store the results (dask arrays) as .png files, with each file corresponding to one slice of the dask array. Do you have any suggestions on how to implement this?
I have been trying to save the results by building a parallel for loop using the joblib dask parallel backend and then looping through the results slice by slice. This works fine until a certain point, at which my pipeline gets stuck without any apparent reason (no memory issue, not too many open file descriptors, etc.).
# array_to_save has been persisted in memory with client.persist()
with joblib.parallel_backend('dask'):
    joblib.Parallel(verbose=100)(
        joblib.delayed(png_sav)(j, stack_height, client.compute(array_to_save[j]))
        for j in range(stack_height))

def png_sav(j, stack_height, prediction):
    img = Image.fromarray(prediction.result().astype('uint32'), 'I')  # 'I' to save as 16 bit binary image
    img.save(png_pn + str(j) + '_slice_prediction.png', "PNG")
    img.close()
You might consider using either of the following:
the map_blocks method, to call a function on every block of your data. Your function can take a block_info= keyword argument if it wants to know where it is in the stack.
converting your array to a list of delayed arrays. Maybe something like this (untested; you should read the to_delayed docs):
x = x.rechunk((1, None, None)) # many chunks along the first axis
slices = x.to_delayed().flatten()
saves = [dask.delayed(numpy_array_to_png)(slc, filename='...') for slc in slices]
dask.compute(*saves)
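numpy_array_to_png above is a placeholder name; a hedged sketch of what it could look like, assuming PIL and 16-bit output as in your original function:

import numpy as np
from PIL import Image

def numpy_array_to_png(block, filename):
    # Each slc computes to a (1, ny, nx) numpy array after the rechunk;
    # drop the leading axis and write one PNG per slice.
    Image.fromarray(np.squeeze(block).astype('uint16')).save(filename, "PNG")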
Check in with the dask-image project. I suspect they have something along these lines: https://github.com/dask/dask-image
Thanks a lot for your hints.
I'm trying to understand how to use .map_blocks() and in particular block_info=. But I don't understand how to use the information given by block_info. I would like to save each chunk separately but don't know how to do this. Any hints? Thanks a lot!
da.map_blocks(png_sav(stack_height, prediction, block_info=True), dtype='uint16')

def png_sav(stack_height, prediction, block_info=True):
    # I don't get how I can save each chunk separately
    img = Image.fromarray("prediction_chunk".astype('uint32'), 'I')  # 'I' to save as 16 bit binary image
    img.save(png_pn + str(j) + '_slice_prediction.png', "PNG")
    img.close()
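For reference, a sketch of how block_info= is meant to be used (untested; save_png_block and the filename pattern are made up for illustration, and x is reused from the snippet above): map_blocks calls your function itself and fills in block_info, and block_info[None]['chunk-location'] gives the index of the current chunk, which can serve as the slice number:

import numpy as np
from PIL import Image

def save_png_block(block, block_info=None):
    # block_info[None] describes the output block; 'chunk-location' is
    # this block's index along each axis. After rechunking to one slice
    # per chunk along axis 0, its first entry is the slice number.
    j = block_info[None]['chunk-location'][0]
    Image.fromarray(np.squeeze(block).astype('uint16')).save(
        '%d_slice_prediction.png' % j, "PNG")
    return block  # map_blocks expects an array to be returned

x.rechunk((1, None, None)).map_blocks(save_png_block, dtype=x.dtype).compute()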
Consider that I have created an image, unknown.tiff, from a page of a PDF named doc.pdf, where the exact command used for the conversion isn't known. The size of this image is ~1 MB.
While the exact command isn't known, it is known that the changes are mostly in depth and density. (A subset of these two would do, too.)
Now, the normal command pattern is:
convert -density 300 PDF.pdf[page-number] -depth 8 image.tiff
But this gives me a file of ~17 MB, which obviously isn't the one I am looking for. If I remove -depth, I get a file of ~34 MB, and when I remove both, I get a blurred image of ~2 MB. I also tried removing only -density, but those results don't match either (~37 MB).
Since the output size of unknown.tiff is so low, I've hypothesized that it might take less time to produce.
Since the conversion time is of great concern to me, I want to know how I can arrive at the exact command that produced unknown.tiff.
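One way to narrow the search (a sketch assuming Pillow; 258 and 259 are the standard TIFF tags for BitsPerSample and Compression) is to read the parameters back from unknown.tiff and match them in your convert call:

from PIL import Image

img = Image.open("unknown.tiff")
print(img.size, img.mode)      # pixel dimensions and sample layout
print(img.info.get("dpi"))     # density, if the TIFF stores a resolution
print(img.tag_v2.get(258))     # BitsPerSample -> candidate -depth value
print(img.tag_v2.get(259))     # Compression, which also affects file size

The pixel dimensions divided by the PDF page size in inches give the effective -density.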
I'm planning to process quite a large number of images and would like to average every 5 consecutive images. My images are saved in the .dm4 file format.
Essentially, I want to produce a single averaged output image for every 5 images, which I can then save. So, for instance, if I had 400 images, I would like to get 80 averaged images representing the 400.
I'm aware that there's the Running Z Projector plugin, but it does a running average and doesn't give me the reduced number of images I'm looking for. Is this something that has already been done before?
Thanks for the help!
It looks like the Image>Stacks>Tools>Grouped Z_Projector does exactly what you want.
I found it by opening the command finder ('L') and filtering on "project".
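For reference, the same grouped average is also straightforward in numpy if you ever need it outside ImageJ (a sketch; it assumes the frames are already loaded into one array, e.g. via a .dm4 reader such as ncempy, which is an assumption on my part):

import numpy as np

def grouped_average(stack, n=5):
    # stack: (frames, height, width); frames must be a multiple of n.
    # 400 frames with n=5 -> 80 averaged images.
    return stack.reshape(-1, n, *stack.shape[1:]).mean(axis=1)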
Please bear with me, I know that what I'm doing can sound strange, but I can guarantee there's a very good reason for that.
I took a movie with my camera, as avi. I imported the movie into iMovie and then extracted the individual frames as PNGs. Then I repacked these frames into a mov using the following code:
movie, error = QTMovie.alloc().initToWritableFile_error_(out_path, None)
mt = QTMakeTime(v, scale)
attrib = {QTAddImageCodecType: "jpeg"}
for path in png_paths:
    image = NSImage.alloc().initWithContentsOfFile_(path)
    movie.addImage_forDuration_withAttributes_(image, mt, attrib)
movie.updateMovieFile()
The resulting mov works, but the frames look "nervous" and shaky compared to the original avi, which appears smoother. The two files are approximately the same size, and both the export and the repacking were done at 30 fps. The frames also appear to be aligned, so it's not due to an accidental shift between frames.
My question is: knowing the file formats and the process I performed, what is the probable cause of this result? How can I fix it?
One textbook reason for "shaky" images is field order issues. Any chance you are working with interlaced material and got your field order messed up? That would cause the results you describe...
As for how you might fix this using the API you're using (QTKit?), I'm at a loss, though, due to my lack of experience with it.
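One quick way to check for interlacing, though (a sketch assuming ffprobe is installed; movie.avi is a placeholder for your original file):

import subprocess

# field_order is reported per video stream: "progressive" means no
# interlacing; "tt"/"bb" mean top- or bottom-field-first interlaced.
out = subprocess.check_output([
    "ffprobe", "-v", "error", "-select_streams", "v:0",
    "-show_entries", "stream=field_order",
    "-of", "default=noprint_wrappers=1", "movie.avi"])
print(out.decode())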