fsl resampled image comes out cropped

I am using fsleyes to resample an image by providing a reference image, but the generated resampled image comes out cropped.
The blue is the resampled overlay.
I tried increasing the resolution of the reference image, but that doesn't help either.

I translated the image over the reference image before applying the resampling, and that seemed to work.
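For reference, here is a minimal sketch of the same kind of resampling done with nibabel rather than the fsleyes GUI (a different tool, offered only as an illustration; the filenames are placeholders). Resampling onto the reference grid keeps only what falls inside the reference field of view, which is why aligning or translating the image into place first, as described above, avoids the cropping.

import nibabel as nib
from nibabel.processing import resample_from_to

# Placeholder filenames
moving = nib.load("moving.nii.gz")
reference = nib.load("reference.nii.gz")

# Interpolate the moving image onto the reference image's shape and affine;
# anything outside the reference field of view is not represented in the output.
resampled = resample_from_to(moving, reference, order=3)
nib.save(resampled, "moving_resampled.nii.gz")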

Related

Using image morphological techniques, locate the broken locations

Can someone please guide me through the steps/operations to be performed to construct this image and detect the broken fence positions in the image?
1. Threshold the image to a binary image: convert the input image to a binary image.
2. Invert the image: invert it to get a black background and white lines.
3. Dilate with a structuring element of one unit of the fence structure.
4. Apply erosion.
5. Bitwise-AND the masks together to retrieve the original background and foreground; the image is inverted by subtracting the bitwise OR from 255.
6. Constructed image - original image will give us the positions of the broken fence.
Will this solution work?
That depends on what you mean by "locate".
After a large horizontal erosion and binarization:
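Below is a minimal OpenCV sketch along those lines, assuming the fence rails run roughly horizontally; the filename and the structuring-element length are guesses, not values from the question.

import cv2

# Placeholder filename; assumes the rails run roughly horizontally.
img = cv2.imread("fence.png", cv2.IMREAD_GRAYSCALE)

# Threshold (Otsu) and invert so the fence structure is white on black.
_, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Long horizontal structuring element: its length must exceed the widest
# break but stay shorter than an intact rail (51 px is a guess).
horiz = cv2.getStructuringElement(cv2.MORPH_RECT, (51, 1))

# Closing bridges the gaps in the rails ("constructed image").
closed = cv2.morphologyEx(bw, cv2.MORPH_CLOSE, horiz)

# Constructed image minus original image leaves only the filled-in gaps,
# i.e. the broken fence positions.
breaks = cv2.subtract(closed, bw)
cv2.imwrite("breaks.png", breaks)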

Pillow Image.paste vs. composite vs. alpha_composite vs. blend, what's the difference?

I'm a newbie in image processing. I'm confused by these methods when merging two images with Pillow:
PIL.Image.Image.paste()
PIL.Image.composite()
PIL.Image.alpha_composite()
PIL.Image.blend()
Could anyone provide a quick explanation, or point me to where I can pick up the related background knowledge?
I see it like this:
blend is the simplest. It takes a fixed, constant proportion of each image at every pixel location, e.g. 30% of image A and 70% of image B at each location all over the image. The ratio is a single number. This operation is not really interested in transparency; it is more of a weighted average where a part of both input images will be visible at every pixel location in the output image.
paste and composite are synonyms. They use a mask of the same size as the images and take a proportion of image A and image B according to the value of the mask, which may be different at each location. So you might have a 0-100 proportion of image A to image B at the top and a 100-0 proportion at the bottom, which would look like a smooth blended transition from one image at the top to the other image at the bottom. Or it may be like a largely opaque foreground through which you only see one input image, with a transparent window through which you see the other input image. The mask, of the same size as the two input images, is key here, and it can assume different values at different locations.
alpha_composite is the most complicated; alpha compositing is best described by the Wikipedia article on the subject.
Put another way: blend uses no alpha/transparency channel, just a fixed proportion of each input image throughout the output image.
paste uses a single alpha channel (the mask) that can vary across the image.
alpha_composite uses two alpha channels that can both vary across the image.
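A minimal sketch of the three behaviours, assuming two RGBA images of the same size ("a.png" and "b.png" are placeholder filenames):

from PIL import Image

a = Image.open("a.png").convert("RGBA")
b = Image.open("b.png").convert("RGBA")

# blend: one fixed ratio for the whole image (here 30% a, 70% b everywhere).
blended = Image.blend(a, b, 0.7)

# composite / paste: a per-pixel mask decides the proportion at each location.
mask = Image.linear_gradient("L").resize(a.size)  # 0 at the top, 255 at the bottom
composited = Image.composite(a, b, mask)          # a where mask is 255, b where it is 0

pasted = b.copy()
pasted.paste(a, (0, 0), mask)                     # same idea, but modifies the image in place

# alpha_composite: uses the images' own alpha channels (Porter-Duff "over").
over = Image.alpha_composite(b, a)                # a composited over b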

Finding cropped and scaled similar image

Given several large original images and a small image that is cropped and isotropically scaled from one of the large images, the task is to find where the small image comes from.
- The cropping usually occurs at the center of the large image, but the exact crop boundary is unknown.
- The size of the small image is about 200x200, but again, the exact size of the small image is unknown.
- If the size of the cropped area is (width, height), the size of the small image must be (width * k, height * k), where k < 1.0.
I've read some related topics on SO and tried methods like ORB and color histograms, but the accuracy is not acceptable. Would you please give me some advice? Is there an efficient algorithm for this problem? Thank you very much.
The term you are looking for is template matching: you want to scan the original image and look for the origin of the cropped and scaled version.
The OpenCV tutorial has an extensive explanation of it.
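A minimal multi-scale template-matching sketch with OpenCV, shown only as one way this could be applied here: the filenames, the scale range, and the number of steps are placeholders, not values from the question. Since the small image was shrunk by k < 1.0, it is upscaled and matched against the large image at each candidate scale.

import cv2
import numpy as np

# Placeholder filenames
large = cv2.imread("large.png", cv2.IMREAD_GRAYSCALE)
small = cv2.imread("small.png", cv2.IMREAD_GRAYSCALE)

best = None  # (score, scale, top-left corner, template size)
for scale in np.linspace(1.0, 3.0, 21):  # candidate values of 1/k (a guess)
    tpl = cv2.resize(small, None, fx=scale, fy=scale, interpolation=cv2.INTER_LINEAR)
    if tpl.shape[0] > large.shape[0] or tpl.shape[1] > large.shape[1]:
        break  # template no longer fits inside the large image
    res = cv2.matchTemplate(large, tpl, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(res)
    if best is None or max_val > best[0]:
        best = (max_val, scale, max_loc, (tpl.shape[1], tpl.shape[0]))

score, scale, (x, y), (w, h) = best
print(f"best score {score:.3f} at scale {scale:.2f}: region ({x},{y})-({x+w},{y+h})")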

Stretch a region of an image with OpenCV or OpenGL in iOS

I am trying to create a double chin in the fattened image, as shown in my desired result image below.
I have morphed the normal face into a fat face by warping the image on a mesh and deforming the mesh.
Original image
Warped image on mesh grid with vertex points displaced
Current result image
I have tried many arrangements of the mesh points but could not get a result like the one shown in the first image.
Any ideas how to achieve this with OpenGL or OpenCV on iOS?
It's obvious from the first image that an extra effect has been added to produce the double or triple chin.
It actually looks like either a preset image blended into the original, or a scaled and stretched version of the original chin blended into the warped image.
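A minimal OpenCV sketch of the second idea (stretch the original chin region and blend it back into the warped image); the filenames, the chin rectangle, and the stretch factor are all placeholder values, the blending step uses OpenCV's seamlessClone as one possible choice, and this is Python rather than the iOS APIs, shown only to illustrate the approach.

import cv2
import numpy as np

img = cv2.imread("warped_face.png")     # placeholder filename
x, y, w, h = 120, 260, 160, 60          # hypothetical chin rectangle

# Crop the chin and exaggerate it vertically.
chin = img[y:y + h, x:x + w]
stretched = cv2.resize(chin, (w, int(h * 1.6)))

# Blend the stretched patch back with seamless (Poisson) cloning so the
# patch borders do not show; the centre is shifted down so the extra
# chin hangs below the original one.
mask = 255 * np.ones(stretched.shape[:2], dtype=np.uint8)
center = (x + w // 2, y + h // 2 + stretched.shape[0] // 4)
result = cv2.seamlessClone(stretched, img, mask, center, cv2.NORMAL_CLONE)

cv2.imwrite("double_chin.png", result)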

resize png image using rmagick without losing quality

I need to resize a 200x200 image to 60x60 with RMagick without losing image quality. Currently I am doing the following for a PNG image:
# Read the uploaded file into an RMagick image
img = Magick::Image.from_blob(params[:file].read)[0]

# Write it back out, forcing maximum quality
img.write(RootPath + params[:dir_str] + "/#{filename}") do
  self.quality = 100
  # self.compression = Magick::ZipCompression
end
I am losing sharpness in the resulting image; I want to resize while losing the least amount of image quality possible.
I tried setting the quality and various compressions, but none of them seem to work well: the resulting images all look as if a layer of color has been stripped away, and the text characters lose sharpness.
Could anyone give me some instructions for resizing PNG images?
You're resizing a picture from 200x200 = 40,000 pixels down to 60x60 = 3,600 pixels, that is, to less than a tenth of the resolution, and you're surprised that you lose image quality? Think of it this way: could you take a 16x16 image and resize it to 5x5 with no loss of quality? That is about the same ratio as you are trying to use here.
If what you say you want to do were actually possible, then every picture could be reduced to one pixel with no loss of quality.
As for the art designer's 60x60 image being better quality than yours, it depends on the original size of the image the art designer was working from. For example, if the art designer was working from an 800x800 image, produced your 200x200 image from it, and also reduced the original 800x800 image to 60x60 in Photoshop, then that 60x60 image will be better quality than the one you have. This is because your 60x60 image has gone through two losses of quality: one to get to 200x200 and a second to go from 200x200 to 60x60. Necessarily this will be worse than an image resized directly from the original.
You could convert the PNG to a vector image, resize the vector to 60x60, then convert the vector back to PNG. Almost lossless.
