Manipulating .gif animation with Wand - ImageMagick

I'm working with Wand to do, essentially, a frame-by-frame transformation of a GIF: I want to composite an image onto each frame, then save the output as an animated GIF.
from wand.image import Image
from wand.drawing import Drawing

def placeImage(gif_name, image_name, save_location):
    with Image(filename=gif_name) as gif:
        with Image(filename=image_name) as image:
            new_frames = []
            for frame_orig in gif.sequence:
                frame = frame_orig.clone()
                with Drawing() as draw:
                    draw.composite(operator='src_over', left=20, top=20,
                                   width=image.width, height=image.height,
                                   image=image)
                    draw(frame)
                new_frames.append(frame)
So now I've got all of these frames (though they're not SingleImage objects, but Image objects) that I'd like to put back together and generate a new gif. Any advice?
I'd like to be able to do something like:
with gif.clone() as output:
    output.sequence = new_frames
    output.save(filename=save_location)

I just answered something very similar here: Resizing GIFs with Wand + ImageMagick
with Image() as dst_image:
    with Image(filename=src_path) as src_image:
        for frame in src_image.sequence:
            frame.resize(x, y)
            dst_image.sequence.append(frame)
    dst_image.save(filename=dst_path)
Hope that helps!
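Putting the two snippets together for the compositing case in the question, a rough, untested sketch might look like this (it reuses the Drawing loop from the question and appends each processed frame to an empty destination image, which is then saved as the new GIF):
from wand.image import Image
from wand.drawing import Drawing

def place_image(gif_name, image_name, save_location):
    with Image(filename=gif_name) as gif:
        with Image(filename=image_name) as overlay:
            with Image() as output:
                for frame_orig in gif.sequence:
                    frame = frame_orig.clone()
                    with Drawing() as draw:
                        draw.composite(operator='src_over', left=20, top=20,
                                       width=overlay.width, height=overlay.height,
                                       image=overlay)
                        draw(frame)
                    output.sequence.append(frame)
                output.save(filename=save_location)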

Related

Magick++: how do I use Magick++ to convert an animated GIF to an animated WebP?

The main logic code is as follows:
std::vector<Magick::Image> images;
Magick::readImages(&images, input_blob);
for (auto &image : images) {
    image.magick("WEBP");
}
output_blob = new Magick::Blob;
Magick::writeImages(images.begin(), images.end(), output_blob);
When I wrote the output_blob data to a file, I got a static WebP image.
Q: How can I get an animated WebP file?
Thanks in advance!

How can I align frames from a D435 using OpenCV?

Intel gives an example of aligning the RGB camera and the depth (IR) camera, but I need to adapt it to OpenCV.
The example only shows alignment in the render step, so I can't use it directly, because I will be using the function below.
Mat color(Size(640, 480), CV_8UC3, (void*)color_frame.get_data(), Mat::AUTO_STEP);
I need to align frames obtained in the following form:
rs2::frameset frames = pipe.wait_for_frames();
rs2::frame color_frame = frames.get_color_frame();
rs2::frame depth_frame = color_map(frames.get_depth_frame());
So, is it possible to use an align function along the lines of the example below?
ex) rs2::frameset align_frame = ...... allocate_composite_frame......
Or is there a function like "depth_aligned_to_color" for the D435, as there is for the SR300?
The frameset is returned after the align function is called; you can then retrieve the aligned color/depth images and pass them into OpenCV functions without a problem. Please download the latest version of librealsense and check rs-align.cpp; a code snippet is attached below.
//Get processed aligned frame
auto processed = align.process(frameset);
// Trying to get both other and aligned depth frames
rs2::video_frame other_frame = processed.first(align_to);
rs2::depth_frame aligned_depth_frame = processed.get_depth_frame();
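For what it's worth, the same flow is available from Python via pyrealsense2, where the OpenCV handoff is just a numpy wrap; a minimal sketch, with stream configuration left at the defaults:
import numpy as np
import pyrealsense2 as rs

pipe = rs.pipeline()
pipe.start()
align = rs.align(rs.stream.color)   # align depth to the color stream

frames = pipe.wait_for_frames()
aligned = align.process(frames)
color_frame = aligned.get_color_frame()
depth_frame = aligned.get_depth_frame()

# The frame buffers become numpy arrays, which OpenCV functions accept directly.
color_img = np.asanyarray(color_frame.get_data())
depth_img = np.asanyarray(depth_frame.get_data())

pipe.stop()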

Loss of data when extracting frames from GIF to PNG?

When I try to use fraxel's answer on
http://stackoverflow.com/questions/10269099/pil-convert-gif-frames-to-jpg
on the image http://24.media.tumblr.com/fffcc2d8e980fbba4f87d51ed4916b87/tumblr_mh8uaqMo2I1rkp3avo2_250.gif
I get OK data for some frames, but others appear to be missing data, e.g.
Correct
Missing
To display these I use ImageMagick's display foo* and then press space to move through the images... is it possible ImageMagick is reading them wrong?
Edit:
Even when using convert and then displaying via display foo* I get the following
Could this be a characteristic of the gif then?
If you can stick to ImageMagick then it is very simple to solve this:
convert input.gif -coalesce output.png
Otherwise, you will have to consider the different forms of how each GIF frame can be constructed. For this specific type of GIF, and also the other one shown in your other question, the following code works (note that in your earlier question, the accepted answer doesn't actually make all the split parts transparent -- at least with the latest released PIL):
import sys
from PIL import Image, ImageSequence

img = Image.open(sys.argv[1])
pal = img.getpalette()
prev = img.convert('RGBA')
prev_dispose = True
for i, frame in enumerate(ImageSequence.Iterator(img)):
    dispose = frame.dispose

    if frame.tile:
        x0, y0, x1, y1 = frame.tile[0][1]
        if not frame.palette.dirty:
            frame.putpalette(pal)
        frame = frame.crop((x0, y0, x1, y1))
        bbox = (x0, y0, x1, y1)
    else:
        bbox = None

    if dispose is None:
        prev.paste(frame, bbox, frame.convert('RGBA'))
        prev.save('foo%02d.png' % i)
        prev_dispose = False
    else:
        if prev_dispose:
            prev = Image.new('RGBA', img.size, (0, 0, 0, 0))
        out = prev.copy()
        out.paste(frame, bbox, frame.convert('RGBA'))
        out.save('foo%02d.png' % i)
Ultimately you will have to recreate what -coalesce does, since it is likely that the code above may not work with certain GIF images.
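As a side note, since the main question above uses Wand: newer Wand releases also expose this operation as Image.coalesce(), so a sketch of the ImageMagick route without leaving Python (assuming a Wand version that provides coalesce()) would be:
from wand.image import Image

with Image(filename='input.gif') as img:
    img.coalesce()
    # PNG can't hold multiple frames, so this should write output-0.png, output-1.png, ...
    img.save(filename='output.png')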
You should try keeping the whole history of frames in "background", instead of:
background = Image.new("RGB", size, (255,255,255))
background.paste( lastframe )
background.paste( im2 )
Just create the "background" once, before the loop, then only paste() each frame onto it; it should work.
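A minimal sketch of that suggestion (it accumulates every frame onto one persistent canvas and ignores GIF disposal methods, which is exactly the caveat the longer answer above handles):
from PIL import Image, ImageSequence

img = Image.open('input.gif')
background = Image.new('RGBA', img.size, (255, 255, 255, 255))

for i, frame in enumerate(ImageSequence.Iterator(img)):
    # Paste each frame onto the same background, so earlier content shows
    # through wherever the current frame is transparent.
    background.paste(frame, (0, 0), frame.convert('RGBA'))
    background.save('foo%02d.png' % i)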

How to embed a watermark in an image using edges in MATLAB?

For a school project I would like to do the following steps to get a watermarked image in MATLAB:
extract the edges from an image
insert a mark on these edges
reconstruct the image
extract the mark
Could someone give me a link to get a good idea of how to do it, or help me do it?
Thank you in advance
You want to add a watermark to an image? Why not just overlay the whole thing.
If you have an image
img = imread('myimage.jpg');
wm = imread('watermark.jpg');
you can just resize the watermark to the size of the image
wm_rs = imresize(wm, [size(img,1) size(img,2)], 'lanczos2');
img_wm = img;
img_wm(wm_rs ~= 0) = wm_rs(wm_rs ~= 0); % This sets non-black pixels to be the watermark. (You'll have to slightly modify this for color images.)
If you want to put it on the edges of the image, you can extract them like this
edges = edge(rgb2gray(img),'canny')
Then you can set the pixels where the edges exist to be watermark pixels
img_wm = img;
img_wm(edges ~= 0) = wm_rs(edges~=0);
Instead of direct assignment you can play around with using a mix of the img and wm_rs pixel values if you want transparency.
You'll probably have to adjust some of what I said to color images, but most should be the same.
Here is a nice and simple example of how you can embed watermarks using MATLAB (in the spatial domain): http://imageprocessingblog.com/digital-watermarking/
See the example below (R2017b or a later release):
% your params
img = imread('printedtext.png');
Transparency = 0.6;
fontColor = [1,1,1]; % RGB, range [0,1]
position = [700,200];

%% add watermark
mask = zeros(size(img),'like',img);
outimg = insertText(mask,position,'china', ...
    'BoxOpacity',0, ...
    'FontSize',200, ...
    'TextColor','white');
bwMask = imbinarize(rgb2gray(outimg));
finalImg = labeloverlay(img,bwMask, ...
    'Transparency',Transparency, ...
    'Colormap',fontColor);
imshow(finalImg)

Converting a numpy array containing image data to CvMat

I have an image in a numpy array, which I save using savefig and then load into a CvMat using OpenCV's LoadImage function. But I want to remove this save-the-image step.
My numpy image size is 25x21, and if I use the fromarray function like
im = cv.fromarray(asarray(img))
I get a CvMat of size 25x21, which is very small. But when I save the image to PNG format and load it back using LoadImage, I get the full-sized image of size 429x509.
Can somebody please tell me how I can get this full-sized image from the numpy array into a CvMat? Can I convert the image from a numpy array to PNG format in code, without saving it using savefig()?
This is what I am doing right now.
imgFigure = imshow(zeros((gridM,gridN)),cmap=cm.gray,vmin=VMIN,vmax=5,animated=True,interpolation='nearest',extent=[xmin,xmax,ymin,ymax])
imgFigure.set_data(reshape(img,(gridM,gridN)))
draw()
fileName = '1p_'
fileName += str(counter)
fileName += ".png"
savefig(fileName,bbox_inches='tight',pad_inches=0.01,facecolor='black')
The size of img above is 525, and gridM and gridN are 25 and 21. Then I load this image using:
img = cv.LoadImage(fileName, cv.CV_LOAD_IMAGE_GRAYSCALE)
Now img size is 429x509.
You can just use cv.fromarray() directly on your numpy array, with no need to save in between:
import cv
import numpy as np
a = np.arange(0,255,0.0255).reshape(50,200)
b = cv.fromarray(a)
cv.SaveImage('saved.png', b)
print b
#Output:
<cvmat(type=42424006 64FC1 rows=50 cols=200 step=1600 )>
The numpy array becomes a cvmat, and the size is unchanged. This is the saved image:
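If what you were actually after is the enlarged 429x509 rendering that savefig produced, you can also resize the cvmat yourself instead of round-tripping through a PNG; a sketch continuing the legacy cv API used above (the target size here is just the one mentioned in the question):
big = cv.CreateMat(429, 509, cv.CV_64FC1)   # rows, cols
cv.Resize(b, big, interpolation=cv.CV_INTER_LINEAR)
cv.SaveImage('saved_big.png', big)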
