I want to merge 2 layers using Script-Fu code - GIMP

I am new to Script-Fu in GIMP.
I want to merge 2 layers using Script-Fu code, but I don't know how to write the function. Can you help me write the (.scm) function?
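A minimal Script-Fu sketch, assuming GIMP 2.10's PDB procedure names: it merges the active layer down into the layer directly below it and returns the merged layer.

; merge the active layer into the layer below it (sketch, GIMP 2.10 PDB names)
(define (merge-two-layers image)
  (let* ((active (car (gimp-image-get-active-layer image)))
         (merged (car (gimp-image-merge-down image active EXPAND-AS-NECESSARY))))
    (gimp-displays-flush)   ; refresh the display so the result is visible
    merged))

From the Script-Fu console, (merge-two-layers (car (gimp-image-list))) would run it on the most recently opened image; gimp-image-merge-visible-layers or gimp-image-flatten are alternatives if you want to merge more than two layers at once.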

Related

Maxima plots auto save

I am using Maxima and I have a lot of resulting plots that I want to save to disk for other uses (making GIFs, etc.).
This is what I am looking at:
Is there any code that can autosave the plots instead of having to save them manually one by one?
Thank you in advance.
Well, one approach is to specify a file name in the arguments of plot2d. Then the plot is output directly to the file and it doesn't show up in the GUI. E.g.,
plot2d (sin(x), [x, 0, 10], [png_file, "mysinplot.png"]);
plot2d recognizes png_file, pdf_file, ps_file, and svg_file. In each case, ? png_file (etc.) will show some info about that option.
Note that there isn't any file output flag for GIF output. The closest thing is PNG, which is similar to GIF.
I think draw also recognizes different file formats but I don't know about that without searching the documentation.
If you are generating a lot of plots, it might be convenient to automatically generate file names via sconcat, e.g. sconcat("myplot", i, ".png") produces "myplot10.png" when i is equal to 10.
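If the plots differ only by some parameter, a loop can write all the files in one go. A minimal sketch (the i*x dependence and file names are just placeholders):

/* write myplot1.png, myplot2.png, ... into the current working directory */
for i : 1 thru 10 do
    plot2d (sin(i*x), [x, 0, 10], [png_file, sconcat("myplot", i, ".png")]);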

Create/apply grunge, vintage-worn, old-scratchy filters in iOS

Does anybody know how to create/apply grunge or vintage-worn filters? I'm creating an iOS app to apply filters to photos, just for fun and to learn more about CIImage. Right now, I'm using Core Image to apply CIGaussianBlur, CIGloom, and the like through calls such as ciFilter.setValue(value, forKey: key).
So far, Core Image filters such as blur, color adjustment, sharpen, and stylize work OK. But I'd like to learn how to apply one of those grunge, vintage-worn effects available in other photo editing apps, something like this:
Does anybody know how to create/apply those kinds of filters?
Thanks!!!
You have two options.
(1) Use "canned" filters in a chain. If the output of one filter is the input of the next, code things that way. It won't waste any resources until you actually call for output.
(2) Write your own kernel code. It can be a color kernel that mutates a single pixel independently, a warp kernel that checks the values of a pixel and its surrounding ones to generate the output pixel, or a general kernel that isn't optimized like the last two. In any case, you can pretty much use GLSL for the code (it's essentially C for the GPU).
Okay, there's a third option: a combination of the two options above. Also, in iOS 11 and above, you can write kernels using Metal 2.
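For option (1), here is a rough Swift sketch of what a chain of canned filters for a faded, vignetted, vintage-ish look might be. The filter names are built-in Core Image filters; the parameter values are only guesses to tune, not a fixed recipe.

import CoreImage

func vintageLook(_ input: CIImage) -> CIImage? {
    guard let sepia = CIFilter(name: "CISepiaTone"),
          let vignette = CIFilter(name: "CIVignette"),
          let instant = CIFilter(name: "CIPhotoEffectInstant") else { return nil }

    // Warm, washed-out colors
    sepia.setValue(input, forKey: kCIInputImageKey)
    sepia.setValue(0.6, forKey: kCIInputIntensityKey)

    // Darkened corners, typical of old prints
    vignette.setValue(sepia.outputImage, forKey: kCIInputImageKey)
    vignette.setValue(1.5, forKey: kCIInputIntensityKey)
    vignette.setValue(2.0, forKey: kCIInputRadiusKey)

    // Instant-camera style color shift
    instant.setValue(vignette.outputImage, forKey: kCIInputImageKey)
    return instant.outputImage
}

Nothing is actually rendered until the resulting CIImage is drawn through a CIContext, which is why chaining filters like this doesn't waste resources.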

Metal: Do I need multiple RenderPipelines to have multiple shaders?

I am very new to Metal, so bear with me as I transition from the ugly state-machine calls of OpenGL to modern graphics frameworks. I really want to make sure I understand how everything works and fits together.
I have read most of Apple's documentation, but it does a better job of describing the function of individual components than how they come together.
Essentially, I am trying to understand whether multiple renderPipelines and renderEncoders are needed in my situation.
To describe my pipeline at a high level, here is what goes on:
1. Retrieve the previous frame's contents from an offscreen texture that was rendered to, and draw some new contents onto it.
2. Switch to rendering on the screen and draw the texture from step 1 to the screen.
3. Do some post-processing (in native resolution).
4. Draw the UI on top as quads (essentially a repeat of 2).
So, in essence, there will be the following vertex/fragment shader pairs:
A. Draw the entities (step 1)
B. Draw quads in a specified area (steps 2 and 4)
C. Post-processing shader 1 (step 3); uses different inputs than D and can't be done in the same shader
D. Post-processing shader 2 (step 3); uses different inputs than C and can't be done in the same shader
There will be the following texture groups:
Texture for each UI element
Texture for the offscreen drawing done in step 1
Potentially more offscreen textures will be used in post-processing, depending on Metal's performance
Ultimately, my confusions are these:
Q1. Render Pipelines take only one vertex and one fragment function, so does this mean I need to have 4 render pipelines even though I only have 3 unique steps to my drawing procedure?
Q2. How am I supposed to use multiple pipelines in one encoder? Wouldn't each successive call to .setRenderPipelineState override the previous one?
Q3. Would you recommend keeping all of my .setFragmentTexture calls right after creating my encoder, or do I need to set those only right before they are needed?
Q4. Is it valid to keep my depthState constant even as I switch between pipelineStates? How do I ensure that my entities on step 1 are rendered with depth but make sure depth information is lost between frames so entities are all on top of the previous contents?
Q5. What do I do with render step 3 where I have two post-processing steps? Do those have to be separate pipelines?
Q6. How can I efficiently build my pipeline knowing that steps 2 and 4 are essentially the same just with different inputs?
I guess it would help me if someone would walk me through what renderPipelineObjects I will need and for what. It would also be useful to understand what some of the renderCommandEncoder commands might look like at a pseudocode level.
Q1. Render Pipelines take only one vertex and one fragment function, so does this mean I need to have 4 render pipelines even though I only have 3 unique steps to my drawing procedure?
If there are 4 unique combinations of shader functions, then it's not correct that you "only have 3 unique steps to my drawing procedure". In any case, yes, you need a separate render pipeline state object for each unique combination of shader functions (as well as for any other attribute of the render pipeline state descriptor that you need to change).
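As an illustration, here is a minimal Swift sketch of building two such pipeline state objects from one descriptor; the shader function names are hypothetical, and the pixel formats must match the render pass each pipeline is used with.

import Metal

func makePipelines(device: MTLDevice, library: MTLLibrary) throws
        -> (entities: MTLRenderPipelineState, quads: MTLRenderPipelineState) {
    let desc = MTLRenderPipelineDescriptor()
    desc.colorAttachments[0].pixelFormat = .bgra8Unorm
    desc.depthAttachmentPixelFormat = .depth32Float

    // One pipeline state per unique vertex/fragment combination
    desc.vertexFunction   = library.makeFunction(name: "entityVertex")
    desc.fragmentFunction = library.makeFunction(name: "entityFragment")
    let entities = try device.makeRenderPipelineState(descriptor: desc)

    desc.vertexFunction   = library.makeFunction(name: "quadVertex")
    desc.fragmentFunction = library.makeFunction(name: "texturedQuadFragment")
    let quads = try device.makeRenderPipelineState(descriptor: desc)

    return (entities, quads)
}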
Q2. How am I supposed to use multiple pipelines in one encoder? Wouldn't each successive call to .setRenderPipelineState override the previous one?
When you send a draw method to the render command encoder, that draw command is encoded with all of the relevant current state and written to the command buffer. If you later change the render pipeline state associated with the encoder that doesn't affect previously-encoded commands, it only affects subsequently-encoded commands.
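A small sketch of that, with hypothetical pipeline states and a hypothetical offscreen texture; the pipeline switch only affects the draw encoded after it.

import Metal

func encodeFrame(encoder: MTLRenderCommandEncoder,
                 scenePipeline: MTLRenderPipelineState,
                 quadPipeline: MTLRenderPipelineState,
                 sceneTexture: MTLTexture,
                 entityVertexCount: Int) {
    encoder.setRenderPipelineState(scenePipeline)
    encoder.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: entityVertexCount)

    // This switch does not touch the draw command encoded above.
    encoder.setRenderPipelineState(quadPipeline)
    encoder.setFragmentTexture(sceneTexture, index: 0)
    encoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4)
}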
Q3. Would you recommend keeping all of my .setFragmentTexture calls right after creating my encoder, or do I need to set those only right before they are needed?
You only need to set them before the draw command that uses them is encoded. Beyond that, it doesn't much matter when you set them. I'd do whatever makes for the clearest, most readable code.
Q4. Is it valid to keep my depthState constant even as I switch between pipelineStates?
Yes, or there wouldn't be separate methods to set them independently. There would be a method to set both.
How do I ensure that my entities on step 1 are rendered with depth but make sure depth information is lost between frames so entities are all on top of the previous contents?
Configure the loadAction for the depth attachment in the render pass descriptor to clear with an appropriate value (e.g. 1.0). If you're using multiple render command encoders, only do this for the first one, of course. Likewise, the render pass descriptor of the last (or only) render command encoder can/should use a storeAction of .dontCare.
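A sketch of such a render pass descriptor, assuming this is the only pass that uses the depth buffer (so it both clears it on load and discards it on store); the depth texture is assumed to be created elsewhere with a depth pixel format, and the color attachments are omitted for brevity.

import Metal

func firstPassDescriptor(depthTexture: MTLTexture) -> MTLRenderPassDescriptor {
    let pass = MTLRenderPassDescriptor()
    pass.depthAttachment.texture = depthTexture
    pass.depthAttachment.loadAction = .clear      // fresh depth every frame
    pass.depthAttachment.clearDepth = 1.0         // "far" for a standard depth test
    pass.depthAttachment.storeAction = .dontCare  // nothing reads depth after this pass
    return pass
}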
Q5. What do I do with render step 3 where I have two post-processing steps? Do those have to be separate pipelines?
Well, the description of your scenario is kind of vague. But, if you want to use a different shader function, then, yes, you need to use a different render pipeline state object.
Q6. How can I efficiently build my pipeline knowing that steps 2 and 4 are essentially the same just with different inputs?
Again, your description is entirely too vague to know how to answer this. In what ways are those steps the same? In what ways are they different? What do you mean about different inputs?
In any case, just do what seems like the simplest, most direct way even if it seems like it might be inefficient. Worry about optimizations later. When that time comes, open a new question and show your actual working code and ask specifically about that.

Trace patterns such that each node is visited only once (Eulerian path) using OpenCV

Here is a problem I have been trying to solve for one complete year, with no success so far. I have to seek help and a concrete solution from the Stack Overflow experts.
My problem statement:
I have been working with some design patterns that I want to trace programmatically, checking whether an Eulerian path exists (as shown in the GIFs below). Below are the patterns and the way I want to draw them (GIFs).
What I wanna achieve:
Given the design pattern images as input, I want to trace each design pattern image in a single stroke, as shown in the GIFs (the GIF animations are just examples of how the patterns are drawn in a single stroke). Once I get the x and y coordinates of the image in single-stroke fashion (an Eulerian path), I will feed those coordinates to my program to trace them.
Thing to be noted in the animation:
1) Basically, it is an undirected graph (the nodes being the vertices of your shapes, the edges, where they exist, being the strokes between 2 vertices). (Eulerian path)
Here are the 15 unique shapes which I used to build the patterns with:
I have more than 400 patterns (3 patterns are shown below), and so far I have not been able to find a generic solution for this. I have manually obtained the x and y coordinates of the patterns and placed them in sequence, but that is not at all scalable.
How do I trace the patterns such that each node is visited only once?
1st kind of pattern and the way it should be drawn:
2nd kind of pattern and the way it should be drawn:
3rd kind of pattern and the way it should be drawn:
Perhaps you can look into the traveling salesman problem (TSP) if you're still struggling with the above. TSP visits each city only once, and if in your case each node is a crossing for your strike-through, then this might help.
Check here for the Python code to look at. I've checked, and the printed output looks nice and structured. Well done, cMinor!
Edit based on discussion: file 1, file2, file3.
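Separately from the linked TSP code, once a pattern has been reduced to a graph (a list of strokes between vertices), Hierholzer's algorithm traces an Eulerian path directly. A minimal Python sketch, which assumes the graph is connected and does not cover extracting the graph from the image:

from collections import defaultdict

def eulerian_path(edges):
    # edges: list of (u, v) vertex pairs; returns a vertex sequence, or None
    # if no single-stroke traversal exists.
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    odd = [v for v in adj if len(adj[v]) % 2 == 1]
    if len(odd) not in (0, 2):
        return None                      # more than 2 odd-degree vertices
    start = odd[0] if odd else next(iter(adj))

    stack, path = [start], []
    while stack:
        v = stack[-1]
        if adj[v]:
            u = adj[v].pop()
            adj[u].remove(v)             # consume the edge in both directions
            stack.append(u)
        else:
            path.append(stack.pop())
    return path[::-1]

# A triangle can be drawn in one stroke:
print(eulerian_path([(0, 1), (1, 2), (2, 0)]))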

How to use ImageJ to merge different channels and do a Z-stack average

I have ten TIFF files, and each of them contains two channels of imaging data. I want to label the two channels with different colors and after that do a Z projection. Does anyone know how to do it?
First of all, make yourself familiar with ImageJ's concepts of displaying pseudocolor and composite images.
Use File > Import > Image Sequence... to open your tiff files as a stack.
You might need to use Image > Hyperstacks > Stack to Hyperstack... to convert your stack into a 2-channel, 10-slice hyperstack.
Then use Image > Stacks > Z Project... to create the z projection.
Using the macro recorder while performing these commands will give you the code (needed to automate the task):
run("Image Sequence...", "open=/path/to/your/files file=tiff sort");
run("Stack to Hyperstack...", "order=xyczt(default) channels=2 slices=10 frames=1 display=Color");
run("Z Project...", "projection=[Average Intensity]");
