I have many layers with irregular boundaries, and a background. I would like to "blend in" the images to the background by applying some sort of filter on their boundaries, gradually. Is there a way of doing this automatically?
What I tried is:
Merge all layers
Select the background of this combined layer
Invert selection
Apply feather to the selection
Fill in the selection with white colour in the mask layer
This method sort of works, but my problem is I have overlapping layers. This means the method above fades the boundaries of the layers to the background but not to each other.
I never tried scripting in GIMP, but I'm more than willing to try it, if anyone has a working solution.
Yes - that can be scripted.
Basically, scripting in GIMP can perform any action usually possible through the UI, and some that aren't.
So, once you identify the steps you need to perform for each desired layer -
like selecting by color with the maximum threshold and disallowing transparency (that should give you a selection with the shape of the layer's visible contents),
then shrinking, inverting and feathering the selection,
and applying "gimp edit cut" or, programmatically, "gimp-edit-fill" with "TRANSPARENT_FILL" -
you can check the available calls for each of these actions under help->procedure browser.
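If you want to try those per-layer steps interactively before committing to a plug-in, here is a minimal sketch you could paste into filters->python-fu->console. It assumes a single open image and only touches the active layer; the amount of 5 pixels is an arbitrary example value, and the console's default namespace (which includes the gimpfu constants) is assumed:
img = gimp.image_list()[0]           # the open image
layer = img.active_layer             # operate on the active layer only
amount = 5                           # example shrink/feather amount, in pixels

pdb.gimp_context_push()
pdb.gimp_context_set_sample_threshold(1)        # maximum threshold
pdb.gimp_context_set_sample_transparent(False)  # disallow transparency
pdb.gimp_image_select_color(img, CHANNEL_OP_REPLACE, layer, (0, 0, 0))
pdb.gimp_selection_shrink(img, amount)
pdb.gimp_selection_invert(img)
pdb.gimp_selection_feather(img, amount * 2)
pdb.gimp_edit_clear(layer)           # clear the feathered border to transparency
pdb.gimp_context_pop()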
Now, to create a GIMP Python script, what you have to do is:
create a Python 2 program that imports everything from the "gimpfu" module; place it as an executable file in one of GIMP's plug-in folders (check the folders at edit->preferences->folders).
In the script, you write your main function and any others - the main function may take as input parameters any GIMP object, like an Image, a Drawable, a color or a palette, or simply the strings and integers you want -
And after that you place an appropriate call to the gimpfu.register function - that will enable your script as a plug-in, with its own menu option in GIMP. Finish the script by placing a call to gimpfu.main().
Also, there is no "ready" way to select a set of layers in a plug-in, rather than having just the currently active layer as input. As a very convenient workaround for these cases, I abuse the "linked" layer marker (clicking in the Layers dialog, to the immediate right of the visibility icon, will display a "chain" icon indicating a layer is linked).
All in all, the template for your plug-in is just:
#! /usr/bin/env python
# coding: utf-8

import gimp
from gimpfu import *


def recurse_blend(img, root, amount):
    if hasattr(root, "layers"):
        # root is the image itself or a layer group
        for layer in root.layers:
            recurse_blend(img, layer, amount)
        return
    layer = root
    if not layer.linked:
        return  # Ignore layers not marked as "linked" in the UI.
    # Perform the actual actions:
    pdb.gimp_image_select_color(img, CHANNEL_OP_REPLACE, layer, (0, 0, 0))
    pdb.gimp_selection_shrink(img, amount)
    pdb.gimp_selection_invert(img)
    pdb.gimp_selection_feather(img, amount * 2)
    pdb.gimp_edit_clear(layer)


def blend_layers(img, drawable, amount):
    # Ignore drawable (the active layer or channel in GIMP)
    # and loop recursively through all layers.
    pdb.gimp_image_undo_group_start(img)
    pdb.gimp_context_push()
    try:
        # Change the selection-by-color options:
        pdb.gimp_context_set_sample_threshold(1)
        pdb.gimp_context_set_sample_transparent(False)
        pdb.gimp_context_set_sample_criterion(SELECT_CRITERION_COMPOSITE)
        recurse_blend(img, img, amount)
    finally:
        # Try to restore the image's undo state, even in the case
        # of a failure in the Python statements.
        pdb.gimp_context_pop()  # restores the context
        pdb.gimp_selection_none(img)
        pdb.gimp_image_undo_group_end(img)


register(
    "image_blend_linked_layers_edges",   # internal procedure name
    "Blends selected layers' edges",     # name displayed in the UI
    "Blend the edges of each layer to the background",
    "João S. O. Bueno",  # author
    "João S. O. Bueno",  # copyright holder
    "2018",              # copyright year(s)
    "Blend layers edges",  # text for the menu option
    "*",  # available for all types of images
    [
        (PF_IMAGE, "image", "Input image", None),          # takes the active image as input
        (PF_DRAWABLE, "drawable", "Input drawable", None),  # takes the active layer as input
        (PF_INT, "amount", "Amount to smooth at the layers' edges", 5),  # prompts the user for an integer value (default 5)
    ],
    [],  # no output values
    blend_layers,  # main function, which works as the entry point
    menu="<Image>/Filters/Layers",  # plug-in domain (<Image>) followed by the menu position
)

main()
EDIT: It seems this is a bug in coremltools. See here:
https://github.com/apple/coremltools/issues/691#event-3295066804
one-sentence summary: the non-maximum suppression layer outputs four values, but I am only able to use two of those four values as inputs to subsequent layers.
I'm building a neural network model using Apple's Core ML Tools package, and I've added a non-maximum suppression (NMS) layer. This layer has four outputs, per the documentation:
output 1: box coordinates, corresponding to the surviving boxes.
output 2: box scores, corresponding to the surviving boxes.
output 3: indices of the surviving boxes. [this is the output I want to use in a subsequent layer]
output 4: number of boxes selected after the NMS algorithm
I've set up a test model with some simple inputs, and the NMS layer correctly returns the expected values for each of the four outputs listed above. If I then use either output#1 or output#2 in a subsequent layer, I have no problems doing so. However, if I try to use output#3 or output#4 in a subsequent layer, I get an error saying that the input was "not found in any of the outputs of the preceding layers."
Jupyter Notebook
The easiest way to replicate the issue is to download and go through this Jupyter Notebook.
Alternatively, the sample code below also walks through the problem I'm having.
Sample Code
I've set up some sample input data as follows:
boxes = np.array(
[[[0.00842474, 0.83051298, 0.91371644, 0.55096077],
[0.34679857, 0.31710117, 0.62449838, 0.70386912],
[0.08059154, 0.74079195, 0.61650205, 0.28471152]]], np.float32)
scores = np.array(
[[[0.87390688],
[0.2797731 ],
[0.72611251]]], np.float32)
input_features = [('boxes', datatypes.Array(*boxes.shape)),
('scores', datatypes.Array(*scores.shape))]
output_features = [('output', None)]
data = {'boxes': boxes,
'scores': scores }
And I've built the test model as follows:
builder = neural_network.NeuralNetworkBuilder(input_features, output_features, disable_rank5_shape_mapping=True)
# delete the original output, which was just a placeholder while initializing the model
del builder.spec.description.output[0]
# add the NMS layer
builder.add_nms(
name="nms",
input_names=["boxes","scores"],
output_names=["nms_coords", "nms_scores", "surviving_indices", "box_count"],
iou_threshold=0.5,
score_threshold=0.5,
max_boxes=3,
per_class_suppression=False)
# make the model output all four of the NMS layer's outputs
for name in ["nms_coords", "nms_scores", "surviving_indices", "box_count"]:
    builder.spec.description.output.add()
    builder.spec.description.output[-1].name = name
    builder.spec.description.output[-1].type.multiArrayType.dataType = ft.ArrayFeatureType.FLOAT32
# # add a linear activation layer (intentionally commented out)
# builder.add_activation(
# name="identity",
# non_linearity="LINEAR",
# input_name="surviving_indices",
# output_name="activation_output",
# params=[1.0,0.0])
# initialize the model
model = coremltools.models.MLModel(builder.spec)
At this point, when I call
output = model.predict(data, useCPUOnly=True)
print(output)
everything works as expected.
For example, if I call print(output['surviving_indices']), the result is [[ 0. 2. -1.]] (as expected).
However, I'm unable to use either the surviving_indices or the box_count outputs as an input to a subsequent layer. For example, if I add a linear activation layer (by uncommenting the block above), I'll get the following error:
Input 'surviving_indices' of layer 'identity' not found in any of the outputs of the preceding layers.
I have the same problem if I use the box_count output (i.e., by setting input_name="box_count" in the add_activation layer).
I know that the NMS layer is correctly returning outputs named surviving_indices and box_count, because when I call print(builder.spec.neuralNetwork.layers[0]) I get the following result:
name: "nms"
input: "boxes"
input: "scores"
output: "nms_coords"
output: "nms_scores"
output: "surviving_indices"
output: "box_count"
NonMaximumSuppression {
iouThreshold: 0.5
scoreThreshold: 0.5
maxBoxes: 3
}
Furthermore, if I use either the nms_coords or the nms_scores outputs as an input to the linear activation layer, I don't have any problems. It will correctly output the unmodified nms_coords or nms_scores values from the NMS layer's output.
I'm baffled as to why the NMS layer correctly outputs what I want it to, but then I'm unable to use those outputs in subsequent layers.
The issue seems to have been fixed. Upgrade with
pip install coremltools>=5.0b3
From this comment made by a CoreMLTools team member:
https://github.com/apple/coremltools/issues/691#issuecomment-900673764
My macOS is 11.3 and I'm using coremltools version 5.0b3.
Please verify that everything is fixed: pip install coremltools==5.0b3 on Big Sur or a Monterey beta, then confirm you are no longer seeing this bug.
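As a sanity check after upgrading, you can wire the previously failing layer back in. The sketch below is just the commented-out block from the question's sample code, plus exposing its output as a model output - it reuses the builder, data and names defined above and introduces no API beyond what is already in the question:
# Re-add the activation layer that fed on the NMS indices output.
builder.add_activation(
    name="identity",
    non_linearity="LINEAR",
    input_name="surviving_indices",
    output_name="activation_output",
    params=[1.0, 0.0])

# Expose the new output and rebuild the model.
builder.spec.description.output.add()
builder.spec.description.output[-1].name = "activation_output"
builder.spec.description.output[-1].type.multiArrayType.dataType = ft.ArrayFeatureType.FLOAT32

model = coremltools.models.MLModel(builder.spec)
print(model.predict(data, useCPUOnly=True)["activation_output"])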
I'm trying to visualize collisions and other events, and am searching for the best way to update the color or other visual properties of a geometry after it has been registered with RegisterVisualGeometry.
I've found the GeometryInstance class, which seems like a promising place for changing mutable illustration properties, but I have yet to find an example where an instance is retrieved from the plant (from a GeometryId obtained through something like GetVisualGeometriesForBody?) and its properties are changed.
As a basic example, I want to change the color of a box's visual geometry when two seconds have passed. I register the geometry pre-finalize with
// box : Body added to plant
// X_WA : Identity transform
// FLAGS_box_l : box side length
geometry::GeometryId box_visual_id = plant.RegisterVisualGeometry(
box, X_WA,
geometry::Box(FLAGS_box_l, FLAGS_box_l, FLAGS_box_l),
"BoxVisualGeometry",
Eigen::Vector4d(0.7, 0.5, 0, 1));
Then, I have a while loop to create a timed event at two seconds where I would like the box to change its color.
double current_time = 0.0;
const double time_delta = 0.008;
bool changed(false);
while (current_time < FLAGS_duration) {
  if (current_time > 2.0 && !changed) {
    std::cout << "Change color for id " << box_visual_id.get_value() << "\n";
    // Change color of box using its GeometryId
    changed = true;
  }
  simulator.StepTo(current_time + time_delta);
  current_time = simulator_context.get_time();
}
Eventually I'd like to call something like this with a more specific trigger like proximity to another object, or velocity, but for now I'm not sure how I would register a simple visual geometry change.
Thanks for the details. This is sufficient for me to provide a meaningful answer of the current state of affairs as well as the future (both near- and far-term plans).
Taking your question as a representative example, changing a visual geometry's color can mean one of two things:
1. The color of the object changes in an "attached" visualizer (drake_visualizer being the prime example).
2. The color of the object changes in a simulated RGB camera (what is currently dev::RgbdCamera, but imminently RgbdSensor).
Depending on what other properties you might want to change mid simulation, there might be additional subtleties/nuances. But using the springboard above, here are the details:
A. Up until recently (drake PR 11796), changing properties after registration wasn't possible at all.
B. PR 11796 was the first step in enabling that. However, it only enables it for changing ProximityProperties. (ProximityProperties are associated with the role geometry plays in proximity queries -- contact, signed distance, etc.)
C. Changing PerceptionProperties is a TODO in that PR and will follow in the next few months (single digit unless a more pressing need arises to bump it up in priority). (PerceptionProperties are associated with the properties geometry has in simulated sensors -- how they appear, etc.)
D. Changing IllustrationProperties is not supported and it is not clear what the best/right way to do so may be. (IllustrationProperties are what get fed to an external visualizer like drake_visualizer.) This is the trickiest, due to the way the LCM communication is currently articulated.
So, when we compare possible implications of changing an object's color (1 or 2, above) with the state of the art and near-term art (C & D, above), we draw the following conclusions:
In the near future, you should be able to change it in a synthesized RGB image.
No real plan for changing it in an external visualizer.
(Sorry, it seems the answer is more along the lines of "oops...you can't do that".)
I know this has been posted elsewhere and that this is by no means a difficult problem, but I'm very new to writing macros in FIJI and am having a hard time even understanding the solutions described in various online resources.
I have a series of images, all in the same folder, and want to apply the same operations to them all and save the resulting Excel files and images in an output folder. Specifically, I'd like to open each image, smooth it, do a max-intensity Z projection, then threshold the images to the same relative value.
This thresholding step is the one causing a problem. By relative value I mean that I would like to set the threshold so that the same percentage of the intensity histogram is included. Currently, in FIJI, if you go to image>adjust>threshold you can move the sliders so that a certain percentage of the image is thresholded, and it will display that value for you in the open window. In my case 98% is what I am trying to achieve, e.g. thresholding all but the top 2% of the data.
Once the threshold is applied to the MIP, I convert it to binary and do particle analysis, then save the results (summary table, results table, image overlay).
My approach has been to try to automate all the steps and do batch processing, but I have been having a hard time adapting what I have written based on the instructions found online. Instead, I've just been opening every image in the directory one by one, applying the macro that I wrote, and then saving the results manually. Obviously this is a tedious approach, so any help would be much appreciated!
What I have been using for my simple macro:
run("Smooth", "stack");
run("Z Project...", "projection=[Max Intensity]");
setAutoThreshold("Default");
//run("Threshold...");
run("Convert to Mask");
run("Make Binary");
run("Analyze Particles...", " show=[Overlay Masks] display exclude clear include summarize in_situ");
You can use the Process ▶ Batch ▶ Macro... command for this.
For further details, see the Batch Processing page of the ImageJ wiki.
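If you'd rather keep everything in a single script, Fiji's Script Editor also runs Jython (Python) scripts, so the batch loop and the "98% of the histogram" threshold can be done in code. The sketch below is only a starting point, not a drop-in solution: the folder paths are placeholders, it assumes TIFF stacks, the 98th-percentile cut-off is computed from the cumulative histogram of the projection, and it reuses the Analyze Particles options from the macro above. Saving the Results/Summary tables is left out for brevity.
import os
from ij import IJ

src_dir = "/path/to/input"    # placeholder input folder
dst_dir = "/path/to/output"   # placeholder output folder

for fname in os.listdir(src_dir):
    if not fname.lower().endswith(".tif"):
        continue
    imp = IJ.openImage(os.path.join(src_dir, fname))
    IJ.run(imp, "Smooth", "stack")
    IJ.run(imp, "Z Project...", "projection=[Max Intensity]")
    mip = IJ.getImage()  # the projection becomes the active image
    # Find the grey level below which 98% of the pixels fall (cumulative histogram).
    hist = mip.getProcessor().getHistogram()
    total = float(sum(hist))
    cumulative, level = 0, 0
    while cumulative < 0.98 * total:
        cumulative += hist[level]
        level += 1
    # Select pixels from the 98th percentile up; swap the bounds for the complement.
    IJ.setThreshold(mip, level, len(hist) - 1)
    IJ.run(mip, "Convert to Mask", "")
    IJ.run(mip, "Analyze Particles...",
           " show=[Overlay Masks] display exclude clear include summarize in_situ")
    IJ.saveAs(mip, "Tiff", os.path.join(dst_dir, "MAX_" + fname))
    imp.close()
    mip.close()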
I need a solution to compare two scanned images.
I have an image of an application form (unfilled), and I need to compare it against other images of the same form to detect whether any of them is a totally unfilled application form.
I tried Emgu CV's AbsDiff, MatchTemplate, etc., but none of them gives me a 100% match, even if I scan the same form twice on the same scanner - probably because of noise in the scanning. I can apply a tolerance, but the problem is that I need to find out whether the user has filled in anything at all, and with a tolerance small changes in the form will not be detected.
I also had a look at the Python Imaging Library, Accord.NET, etc., but couldn't find an approach for comparing this type of image.
Any suggestions on how to do this type of image comparison?
Is there any free or paid library available for this ?
Only inspect the form fields. Without distractions it's easier to detect small changes. You also don't have to inspect each pixel. Use the histogram or mean color. I used SimpleCV:
from SimpleCV import *
form = Image("form.png")
form_field = form.crop(34, 44, 200, 30)
mean_color = form_field.meanColor()[0] # For simplicity only red channel.
if mean_color < 253:
    print "Field is filled"
else:
    print "Field is empty"
Alternatively, look for features, e.g. corners, blobs or key points. Key points will ignore noise and will work better with poor scans:
key_points = form_field.findKeypoints()
if key_points:
    print "Field is filled"
    key_points.show(autocolor=True)
else:
    print "Field is empty"
I have a .xcf file with 20+ layers that I use for making a sprite file.
I'd like to save all these layers to separate files with only the content and size of each layer.
I found this script for gimp: https://github.com/jiilee/gimp
Unfortunately that script creates files with the full size of the image and not the size of each layer.
To give an example:
An image that is 700px wide and 400px high.
A layer is placed at x: 100px, y: 29px with width: 72px, height: 21px.
The script I found makes a file that is 700px x 400px and not, as I need, 72px x 21px.
Is it possible to do this automatically?
Yes, but I'd recommend the use of Python as a scripting language in this case, rather than script-fu.
For something like this, you would not even need a plug-in; you can just type something like the following in filters->python-fu->console:
>>> img = gimp.image_list()[0]
>>> img.layers[0].name
'C'
>>> folder = "/tmp/"
>>> for layer in img.layers:
...     name = folder + layer.name + ".png"
...     pdb.gimp_file_save(img, layer, name, name)
...
>>>
Done! This snippet assumes a single image open in GIMP (it takes the last open image in the list by default, with the [0]).
The call to export an image takes in a drawable (which can be a layer, a mask, or other item) as input, and saves the file guessing the type by the filename extension with the default parameters.
And, of course, you can improve on it by creating a more elaborate naming scheme for the files, and so on. To turn this core into an automated python-fu plug-in, you just have to put it inside a function, in a .py file in your personal plug-in directory (on Unix/Mac OS/Linux you also have to mark the file as executable), and make calls to the register and main functions of GIMP-Fu in the same file.
Pick up the example here, http://www.ibm.com/developerworks/opensource/library/os-autogimp/ , on how to arrange the file and make these calls (you can go straight to Listing 6).
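For reference, here is a bare-bones sketch of what that wrapping could look like. The procedure name, menu entry and folder parameter are illustrative choices, not anything GIMP mandates; the exporting loop is the same one from the console snippet above:
#!/usr/bin/env python
# coding: utf-8
from gimpfu import *

def export_layers(img, drawable, folder):
    # Save each layer to its own PNG; the file type is guessed from the extension.
    for layer in img.layers:
        name = folder + "/" + layer.name + ".png"
        pdb.gimp_file_save(img, layer, name, name)

register(
    "export_layers_to_folder",      # internal procedure name (illustrative)
    "Export layers to files",       # blurb
    "Save each layer of the image as a separate PNG file",
    "you", "you", "2015",           # author, copyright holder, year
    "Export layers to folder...",   # menu label
    "*",                            # works on any image type
    [
        (PF_IMAGE, "image", "Input image", None),
        (PF_DRAWABLE, "drawable", "Input drawable", None),
        (PF_DIRNAME, "folder", "Output folder", "/tmp"),
    ],
    [],
    export_layers,
    menu="<Image>/Filters/Layers",
)

main()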