Loss of data when extracting frames from GIF to PNG? - image-processing

When I try to use fraxel's answer on
http://stackoverflow.com/questions/10269099/pil-convert-gif-frames-to-jpg
on the image http://24.media.tumblr.com/fffcc2d8e980fbba4f87d51ed4916b87/tumblr_mh8uaqMo2I1rkp3avo2_250.gif
I get OK data for some frames, but for others it looks like data is missing, e.g.
(two screenshots were attached: one frame labelled Correct, one labelled Missing)
To display these I use ImageMagick's display foo* and press space to move through the images... is it possible ImageMagick is reading them wrong?
Edit:
Even when using convert and then displaying via display foo* I get the same broken frames.
Could this be a characteristic of the GIF then?

If you can stick to ImageMagick then it is very simple to solve this:
convert input.gif -coalesce output.png
Otherwise, you will have to consider the different ways in which each GIF frame can be constructed. For this specific type of GIF, and also the one shown in your other question, the following code works (note that in your earlier question the accepted answer doesn't actually make all the split parts transparent, at least with the latest released PIL):
import sys
from PIL import Image, ImageSequence

img = Image.open(sys.argv[1])
pal = img.getpalette()
prev = img.convert('RGBA')
prev_dispose = True

for i, frame in enumerate(ImageSequence.Iterator(img)):
    dispose = frame.dispose

    if frame.tile:
        # This frame only updates the region described by its tile
        x0, y0, x1, y1 = frame.tile[0][1]
        if not frame.palette.dirty:
            # No local palette for this frame, so reuse the global one
            frame.putpalette(pal)
        frame = frame.crop((x0, y0, x1, y1))
        bbox = (x0, y0, x1, y1)
    else:
        bbox = None

    if dispose is None:
        # No disposal: keep compositing onto the previous output
        prev.paste(frame, bbox, frame.convert('RGBA'))
        prev.save('foo%02d.png' % i)
        prev_dispose = False
    else:
        # Dispose to background: start again from a transparent canvas
        if prev_dispose:
            prev = Image.new('RGBA', img.size, (0, 0, 0, 0))
        out = prev.copy()
        out.paste(frame, bbox, frame.convert('RGBA'))
        out.save('foo%02d.png' % i)
Ultimately you will have to recreate what -coalesce does, since the code above will likely not work with certain GIF images.
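If Python only needs to drive the extraction, one pragmatic option is to let ImageMagick do the coalescing and just collect its output. A minimal sketch, assuming the convert binary is on the PATH (newer ImageMagick installs may expose it as magick instead) and that input.gif and the foo prefix are placeholders:

import glob
import subprocess

# ImageMagick reconstructs the full frames and writes one PNG per frame,
# numbering them via the %02d pattern in the output name.
subprocess.run(["convert", "input.gif", "-coalesce", "foo%02d.png"], check=True)

frames = sorted(glob.glob("foo*.png"))
print("extracted %d frames" % len(frames))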

You should try keeping the whole history of frames in "background", instead of:
background = Image.new("RGB", size, (255,255,255))
background.paste( lastframe )
background.paste( im2 )
Just create the "background" once before the loop, then only paste() each frame onto it; it should work.
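A minimal sketch of that suggestion, assuming the GIF is walked with PIL's ImageSequence as in the other answer (file names here are illustrative); note that this simple accumulation can still go wrong for GIFs that rely on frame disposal, as discussed above:

import sys
from PIL import Image, ImageSequence

im = Image.open(sys.argv[1])
# Create the background once, before the loop...
background = Image.new("RGB", im.size, (255, 255, 255))
for i, frame in enumerate(ImageSequence.Iterator(im)):
    # ...then only paste each frame onto it, using the frame's own
    # transparency as the paste mask so earlier content shows through
    background.paste(frame, None, frame.convert("RGBA"))
    background.save("frame%02d.png" % i)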

Related

How to rotate a non-squared image in frequency domain

I want to rotate an image in the frequency domain. Inspired by the answers to Image rotation and scaling in the frequency domain?, I managed to rotate square images. (See the following Python script using OpenCV.)
import cv2
import numpy as np
from numpy.fft import fftshift, ifftshift

# 'angle' (the rotation angle) is defined elsewhere in the script
M = cv2.imread("lenna.png", cv2.IMREAD_GRAYSCALE)  # single-channel, so the window and DFT below apply directly
M = np.float32(M)
hanning = cv2.createHanningWindow((M.shape[1], M.shape[0]), cv2.CV_32F)
M = hanning * M
sM = fftshift(M)

rotation_center = (M.shape[1] / 2, M.shape[0] / 2)
rot_matrix = cv2.getRotationMatrix2D(rotation_center, angle, 1.0)

FsM = fftshift(cv2.dft(sM, flags=cv2.DFT_COMPLEX_OUTPUT))
rFsM = cv2.warpAffine(FsM, rot_matrix, (FsM.shape[1], FsM.shape[0]),
                      flags=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
IrFsM = ifftshift(cv2.idft(ifftshift(rFsM), flags=cv2.DFT_REAL_OUTPUT))
This works fine with square images (better results could be achieved by padding the image).
However, when only using a non-square portion of the image, the rotation in the frequency domain shows some kind of shearing effect.
Any idea on how to achieve this? Obviously I could pad the image to make it square, but the final purpose of all this is to rotate FFTs as fast as possible for an iterative image registration algorithm, and padding would slightly slow it down.
Following the suggestion of @CrisLuengo, I found the affine transform needed to avoid padding the image. Obviously it depends on the image size and the application, but for my case avoiding the padding is very interesting.
The modified script now looks like this:
# rot_matrix = cv2.getRotationMatrix2D(rotation_center, angle, 1.0)

# Anisotropy factors compensating for the non-square spectrum
kx = 1.0
ky = 1.0
if M.shape[0] > M.shape[1]:
    kx = float(M.shape[0]) / M.shape[1]
else:
    ky = float(M.shape[1]) / M.shape[0]

# Rotation about rotation_center combined with the kx/ky correction
affine_transform = np.zeros([2, 3])
affine_transform[0, 0] = np.cos(angle)
affine_transform[0, 1] = np.sin(angle) * ky / kx
affine_transform[0, 2] = (1 - np.cos(angle)) * rotation_center[0] - ky / kx * np.sin(angle) * rotation_center[1]
affine_transform[1, 0] = -np.sin(angle) * kx / ky
affine_transform[1, 1] = np.cos(angle)
affine_transform[1, 2] = kx / ky * np.sin(angle) * rotation_center[0] + (1 - np.cos(angle)) * rotation_center[1]

FsM = fftshift(cv2.dft(sM, flags=cv2.DFT_COMPLEX_OUTPUT))
rFsM = cv2.warpAffine(FsM, affine_transform, (FsM.shape[1], FsM.shape[0]),
                      flags=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
IrFsM = ifftshift(cv2.idft(ifftshift(rFsM), flags=cv2.DFT_REAL_OUTPUT))

I wanted to detect objects in an HSV image, but I keep getting the error: Expected Ptr<cv::UMat> for argument '%s'

I was trying to create a trackbar window and get the HSV values of the image by adjusting the trackbars. I created a mask and then adjusted the trackbars to detect an object in the HSV image.
    def nothing(x):
        pass

cv.namedWindow("Tracking")
cv.createTrackbar("LH","Tracking",0,255,nothing)
cv.createTrackbar("LS","Tracking",0,255,nothing)
cv.createTrackbar("LV","Tracking",0,255,nothing)
cv.createTrackbar("UH","Tracking",255,255,nothing)
cv.createTrackbar("US","Tracking",255,255,nothing)
cv.createTrackbar("UV","Tracking",255,255,nothing)

while True:
    frame = cv.imread("C:/Users/acer/Desktop/insects/New folder/ins.jpg")
    hsv = cv.cvtColor(frame,cv.COLOR_BGR2HSV)

    l_h = cv.getTrackbarPos("LH","Tracking")
    l_s = cv.getTrackbarPos("LS","Tracking")
    l_v = cv.getTrackbarPos("LV","Tracking")
    u_h = cv.getTrackbarPos("UH","Tracking")
    u_s = cv.getTrackbarPos("US","Tracking")
    u_v = cv.getTrackbarPos("UV","Tracking")

    l_b = np.array([l_h,l_s,l_v])
    u_b = np.array([u_h,u_s,u_v])

    mask = (hsv,l_b,u_b)
    res = cv.bitwise_and(frame,frame,mask=mask)

    cv.imshow("frame",frame)
    cv.imshow("mask",mask)
    cv.imshow("res",res)

    key = cv.waitKey(1)
    if key == 27:
        break

cv.destroyAllWindows()
There are a few issues with your code:
1) You have no import statements. You need at least:
import cv2 as cv
import numpy as np
2) Your indentation is incorrect. Your function nothing() should not be indented.
3) You omitted the call to inRange(); you need:
mask = cv.inRange(hsv,l_b,u_b)
4) You have scaled the Hue into the range 0..255, but for uint8 images OpenCV actually uses the range 0..180 (hue in degrees divided by two, so that a full 360 degrees fits below the 255 upper limit of uint8).
By the way, it is fairly poor practice to do "loop-invariant" work inside a loop; here that means hitting the disk on every iteration to re-read the image, re-decode the JPEG and convert it to HSV. All of that can be done once outside the loop, and inside it you just keep working with the HSV image already in memory.
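Putting those points together, a minimal corrected sketch might look like this (the image path is shortened and purely illustrative, and the hue sliders are capped at 179 per point 4):

import cv2 as cv
import numpy as np

def nothing(x):
    pass

cv.namedWindow("Tracking")
cv.createTrackbar("LH", "Tracking", 0, 179, nothing)    # Hue is 0..179 in OpenCV
cv.createTrackbar("LS", "Tracking", 0, 255, nothing)
cv.createTrackbar("LV", "Tracking", 0, 255, nothing)
cv.createTrackbar("UH", "Tracking", 179, 179, nothing)
cv.createTrackbar("US", "Tracking", 255, 255, nothing)
cv.createTrackbar("UV", "Tracking", 255, 255, nothing)

frame = cv.imread("ins.jpg")                  # loop-invariant: read and decode once
hsv = cv.cvtColor(frame, cv.COLOR_BGR2HSV)    # convert once

while True:
    l_b = np.array([cv.getTrackbarPos("LH", "Tracking"),
                    cv.getTrackbarPos("LS", "Tracking"),
                    cv.getTrackbarPos("LV", "Tracking")])
    u_b = np.array([cv.getTrackbarPos("UH", "Tracking"),
                    cv.getTrackbarPos("US", "Tracking"),
                    cv.getTrackbarPos("UV", "Tracking")])

    mask = cv.inRange(hsv, l_b, u_b)          # the missing inRange() call
    res = cv.bitwise_and(frame, frame, mask=mask)

    cv.imshow("frame", frame)
    cv.imshow("mask", mask)
    cv.imshow("res", res)
    if cv.waitKey(1) == 27:                   # Esc to quit
        break

cv.destroyAllWindows()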

vips - How to achieve edge feather effect

I'm using the vips library for manipulating some images, specifically its Lua binding, lua-vips, and I'm trying to find a way to do a feather effect on the edge of an image.
This is the first time I've tried a library for this kind of task; I've been looking at the list of available functions, but I still have no idea how to do it. It's not a complex shape, just a basic rectangular image whose top and bottom edges should blend smoothly with the background (another image that I'm currently compositing with vips_composite()).
Supposing that a "feather_edges" method existed, it would be something like:
local bg = vips.Image.new_from_file("foo.png")
local img = vips.Image.new_from_file("bar.png") --smaller than `bg`
img = img:feather_edges(6) --imagine a 6px feather
bg:composite(img, 'over')
Still, it would be nice to be able to specify which parts of the image should be feathered. Any ideas on how to do this?
You need to pull the alpha out of the top image, mask off the edges with a black border, blur the alpha to feather the edges, reattach, then compose.
Something like:
#!/usr/bin/luajit
vips = require 'vips'
function feather_edges(image, sigma)
    -- split to alpha + image data
    local alpha = image:extract_band(image:bands() - 1)
    local image = image:extract_band(0, {n = image:bands() - 1})

    -- we need to place a black border on the alpha we can then feather into,
    -- and scale this border with sigma
    local margin = sigma * 2
    alpha = alpha
        :crop(margin, margin,
              image:width() - 2 * margin, image:height() - 2 * margin)
        :embed(margin, margin, image:width(), image:height())
        :gaussblur(sigma)

    -- and reattach
    return image:bandjoin(alpha)
end
bg = vips.Image.new_from_file(arg[1], {access = "sequential"})
fg = vips.Image.new_from_file(arg[2], {access = "sequential"})
fg = feather_edges(fg, 10)
out = bg:composite(fg, "over", {x = 100, y = 100})
out:write_to_file(arg[3])
As jcupitt said, we need to pull the alpha band from the image, blur it, join it again, and composite it with the background; however, using the function as it was left a thin black border around the foreground image.
To overcome that, we need to copy the image, resize it according to the sigma parameter, extract the alpha band from the reduced copy, blur it, and replace the alpha band of the original image with it. This way, the border of the original image is completely covered by the transparent parts of the alpha.
local function featherEdges(img, sigma)
    local copy = img:copy()
        :resize(1, { vscale = (img:height() - sigma * 2) / img:height() })
        :embed(0, sigma, img:width(), img:height())

    local alpha = copy
        :extract_band(copy:bands() - 1)
        :gaussblur(sigma)

    return img
        :extract_band(0, { n = img:bands() - 1 })
        :bandjoin(alpha)
end

How can I align frames from the D435 using OpenCV?

Intel gives an example of aligning the RGB camera and the depth (IR) camera, but I need to adapt it to OpenCV.
The example only shows alignment in the renderer, so I can't use it, because I will use the function below:
Mat color(Size(640, 480), CV_8UC3, (void*)color_frame.get_data(), Mat::AUTO_STEP);
I need to align frames obtained in the form below:
rs2::frameset frames = pipe.wait_for_frames();
rs2::frame color_frame = frames.get_color_frame();
rs2::frame depth_frame = color_map(frames.get_depth_frame());
So, is it possible to use an align function along the lines of the following?
ex) rs2:: frameset align_frame = ...... allocate_composite_frame......
Or is there a function like "depth_aligned_to_color" for the D435, as there is for the SR300?
The frameset is returned after the align function is called; you can then retrieve the aligned color/depth frames and pass them to OpenCV functions without problems. Please download the latest librealsense release and check rs-align.cpp; a code snippet is attached below.
//Get processed aligned frame
auto processed = align.process(frameset);
// Trying to get both other and aligned depth frames
rs2::video_frame other_frame = processed.first(align_to);
rs2::depth_frame aligned_depth_frame = processed.get_depth_frame();
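For reference, the same align-then-retrieve flow is also available through the Python wrapper (pyrealsense2), which makes handing the aligned images to OpenCV straightforward. A minimal sketch, where the stream resolutions and formats are assumptions rather than values from your setup:

import numpy as np
import pyrealsense2 as rs

pipe = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
cfg.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipe.start(cfg)

align = rs.align(rs.stream.color)           # align depth to the color stream

frames = pipe.wait_for_frames()
aligned = align.process(frames)             # frameset of aligned frames
color_frame = aligned.get_color_frame()
depth_frame = aligned.get_depth_frame()

# Wrap the raw buffers as arrays that OpenCV functions accept directly
color = np.asanyarray(color_frame.get_data())   # HxWx3 uint8 (BGR, as configured above)
depth = np.asanyarray(depth_frame.get_data())   # HxW uint16 depth values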

How to embed a watermark in an image using edges in MATLAB?

For a school project I would like to carry out the following steps to produce a watermarked image in MATLAB:
extract the edges from an image
insert a mark on this edge
reconstruct the image
extract the mark
Could someone give me a link that gives a good idea of how to do this, or help me do it?
Thank you in advance.
You want to add a watermark to an image? Why not just overlay the whole thing?
If you have an image:
img = imread('myimage.jpg')
wm = imread('watermark.jpg')
You can just resize the watermark to the size of the image:
wm_rs = imresize(wm, [size(img,1) size(img,2)], 'lanczos2');
img_wm = img;
img_wm(wm_rs ~= 0) = wm_rs(wm_rs ~= 0); % sets non-black pixels to the watermark (you'll have to slightly modify this for colour images)
If you want to put it on the edges of the image, you can extract them like this
edges = edge(rgb2gray(img),'canny')
Then you can set the pixels where the edges exist to be watermark pixels
img_wm = img;
img_wm(edges ~= 0) = wm_rs(edges~=0);
Instead of direct assignment you can play around with using a mix of the img and wm_rs pixel values if you want transparency.
You'll probably have to adjust some of what I said to color images, but most should be the same.
Here is a nice and simple example of how you can embed watermarks using MATLAB (in the spatial domain): http://imageprocessingblog.com/digital-watermarking/
See the example below (R2017b or later release):
% your params
img = imread('printedtext.png');
Transparency = 0.6;
fontColor = [1,1,1]; % RGB,range [0,1]
position = [700,200];
%% add watermark
mask = zeros(size(img),'like',img);
outimg = insertText(mask,position,'china', ...
    'BoxOpacity',0, ...
    'FontSize',200, ...
    'TextColor', 'white');
bwMask = imbinarize(rgb2gray(outimg));
finalImg = labeloverlay(img,bwMask, ...
    'Transparency',Transparency, ...
    'Colormap',fontColor);
imshow(finalImg)
