How to alter delay in a gif, without altering its speed - imagemagick

For stupid timing reasons, I need a GIF with a delay of 6. Alas, my material has a delay of 20.
What I effectively need is to drop the delay to 6 and at the same time duplicate every frame three or four times. I don't mind the timing being a little off.
This seems like a simple enough problem, but it is utterly ungoogleable.

You can do it like this:
#!/bin/bash
# Split animation into constituent frames, saving as "frame-nnn.gif"
convert animated.gif -coalesce frame-%03d.gif
# Make array of all framenames
frames=( frame-*gif )
# Triplicate array elements
for ((i=0;i<${#frames[@]};i++)); do newframes+="${frames[i]} ${frames[i]} ${frames[i]} "; done
# DEBUG echo ${newframes[@]}
# Rebuild animation with new speed
convert -delay 10 ${newframes[@]} new.gif
# Clean up
rm frame-*.gif 2> /dev/null
My script assumes your original is called animated.gif and the result will be called new.gif. Obviously you can change the delays and the number of duplicates as you wish; the values I have chosen are illustrative.
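The frame-duplication step can be checked in isolation with any list of names — a minimal sketch, with placeholder frame names, using an array append rather than the string append above:

```shell
#!/bin/bash
# Sketch: triplicate every element of an array, as the script above does.
frames=( frame-000.gif frame-001.gif frame-002.gif )  # placeholder names
newframes=()
for f in "${frames[@]}"; do
  newframes+=( "$f" "$f" "$f" )
done
echo "${#newframes[@]}"    # 9 — three copies of each of the three frames
```

The array form also survives filenames containing spaces, which the string-append version would split apart.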

Related

Video Morph Between Two Images, FFMPEG/Minterpolate

I am trying to make a quick and easy morph video using two frames (png images) with ffmpeg's minterpolate filter, in a bash script on Ubuntu Linux. The intent is to use the morphs as transitions between similar video in a different video editor later.
It will work on 3+ frames/images, but fails using just 2 frames/images.
First the code that works: 3 frames
This is using three 1080p png files:
test01_01.png
test01_02.png
test01_03.png
input01="test01_%02d.png"
ffmpeg -y -fflags +genpts -r 30 -i $input01 -vf "setpts=100*PTS,minterpolate=fps=24:scd=none" -pix_fmt yuv420p "test01.mp4"
This takes a bit of processing time, then creates a 414kb, roughly three second mp4 video of a morph starting with the first frame, morphing to the second, then morphing to the third.
The code that fails: 2 frames
This is using just two of the same 1080p png files:
test02_01.png
test02_02.png
input01="test02_%02d.png"
ffmpeg -y -fflags +genpts -r 30 -i $input01 -vf "setpts=100*PTS,minterpolate=fps=24:scd=none" -pix_fmt yuv420p "test02.mp4"
This almost immediately creates a 262 byte corrupt mp4 file. There are no differences except the number of frames.
Things I've tried:
I have tried this with the Ubuntu default repo version of ffmpeg, and the static 64bit 5.0 and git-20220108-amd64 versions, all with the same result.
I have also tried with a 2-frame mp4 file as the input, with the same result.
Thoughts?
Is this a bug in ffmpeg or am I doing something wrong?
I am also open to any suggestions for creating a morph like this using other Linux-compatible software.
Thank you for any insight!
It is not documented, but it looks like the minterpolate filter requires at least 3 input frames.
We can create a longer video using 5 input frames and keep only the relevant part.
To get the same output as applying the minterpolate filter with only two input images, we may use the following solution:
Define two input streams:
Set test02_01.png as the first input and test02_02.png as the second input.
Loop each image at least twice, using -stream_loop
(test02_01.png is repeated twice and test02_02.png is repeated 3 times).
Set the input frame rate to 0.3 fps (it is equivalent to -r 30 and setpts=100*PTS).
The input arguments are as follows: -r 0.3 -stream_loop 1 -i test02_01.png -r 0.3 -stream_loop 2 -i test02_02.png.
Concatenate the two input streams using concat filter.
Apply the minterpolate filter to the concatenated output.
The output of the above stage is a video with a few redundant seconds at the beginning and a few at the end.
Apply trim filter for keeping the relevant part.
Add setpts=PTS-STARTPTS at the end (as recommended when using trim filter).
Suggested command:
ffmpeg -y -r 0.3 -stream_loop 1 -i test02_01.png -r 0.3 -stream_loop 2 -i test02_02.png -filter_complex "[0][1]concat=n=2:v=1:a=0[v];[v]minterpolate=fps=24:scd=none,trim=3:7,setpts=PTS-STARTPTS" -pix_fmt yuv420p test02.mp4
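The trim window follows from the input timing: at -r 0.3, each looped frame occupies 1/0.3 ≈ 3.33 s. A quick check of that arithmetic (my own reading of why trim=3:7 is chosen, not part of the original answer):

```shell
# Frames 1-2 are test02_01.png, frames 3-5 are test02_02.png; the morph
# happens between frame 2 (t ≈ 3.33 s) and frame 3 (t ≈ 6.67 s), hence trim=3:7.
awk 'BEGIN {
  for (n = 0; n < 5; n++) printf "frame %d starts at %.2f s\n", n + 1, n / 0.3
}'
```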
Sample output (as animated GIF):
test02_01.png:
test02_02.png:

Turtle - What precisely is the turtle's speed? X actions/second?

A student asked me this and I can't find the answer. You can set the turtle's speed to 0-10. But what does that actually mean? x actions / second?
We are on Code.org, which translates its code in the lessons into Javascript, but this command is found in the Play lab, which provides no translation. I am assuming this is analogous to JS-turtle, but if you know the answer for Python Turtle, etc, I'd love to hear it.
What precisely is the turtle's speed? X actions/second? ... if you
know the answer for Python Turtle, etc, I'd love to hear it.
In standard Python, the turtle's speed() method indirectly controls the speed of the turtle by dividing up the turtle's motion into smaller or larger steps, where each step has a defined delay.
By default, if we don't mess with setworldcoordinates(), or change the default screen update delay using delay(), or tracer(), then the motion of a turtle is broken up into a number of individual steps determined by:
steps = int(distance / (3 * 1.1**speed * speed))
At the default speed (3 or 'slow'), a 100px line would be drawn in 8 steps. At the slowest speed (1 or 'slowest'), 30 steps. At a fast speed (10 or 'fast'), 1 step. (Oddly, the default speed isn't the 'normal' (6) speed!) Each step incurs a screen update delay of 10ms by default.
Using a speed of 0 ('fastest'), or turning off tracer(), avoids this process altogether and just draws lines in one step.
There's a similar logic for how the speed() setting affects the number of steps the turtle takes to rotate when you do right() or left().
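The quoted formula is easy to check numerically; awk is used here purely to evaluate it (the formula itself is the one described above):

```shell
# Evaluate steps = int(distance / (3 * 1.1**speed * speed)) for a 100px line.
steps_for() {
  awk -v d="$1" -v s="$2" 'BEGIN { print int(d / (3 * 1.1 ^ s * s)) }'
}
steps_for 100 3    # 8  (the default, 'slow')
steps_for 100 1    # 30 ('slowest')
steps_for 100 10   # 1  ('fast')
```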
https://docs.python.org/3/library/turtle.html#turtle.speed
From the docs you can see that it is just an arbitrary value.

What happens in the GPU between the call to gl.drawArrays() to g.readPixels?

Changing the Title in the hopes of being more accurate.
We have some code which runs several programs in succession by calling drawArrays() . The output textures from each stage are fed into the next and so on.
After the final call to draw, a call to readPixels() is made.
This call takes an enormous amount of time (for an output of < 1000 floats). I have measured a readPixels of that size in isolation which takes 1 or 2 ms. However in our case we see a delay of about 1500ms.
So we conjectured that the actual computation must have not started until we called readPixels(). To test this theory and to force the computation, we placed a call to gl.flush() after each gl.drawxx(). This made no difference.
So we replaced that with a call to gl.finish(). Again no difference. We finally replaced it with a call to getError(). Still no difference.
Can we conclude that the GPU actually does not draw anything unless the framebuffer is read from? Can we force it to do so?

Apple's Automator to make JPEG, asking for compression level and dimensions

This is a followup to Apple's Automator: compression settings for jpg?
It works; however, I am failing at modifying it to make it more flexible.
I am incorporating sips into Automator to try to create a droplet that changes an image file to a JPEG of a particular quality and dimensions. The Automator app asks for the compression level and pixel width, then spits out the requested file. Except... mine doesn't. The scripting (my lack of programming knowledge) is my weak link.
This is what I've done that's not working... Please see:
There are two mistakes in the code that the original poster made.
When referencing a variable in shell, you must prefix it with "$",
and the places where this was missed are what stop the code from working as it should.
The lines missing the $ are: compressionLevel=file
and
sips -s format jpeg -s formatOptions compressionLevel $file --out ${filename%.*}.jpg
The corrected code:
should be:
compressionLevel=$file
and
sips -s format jpeg -s formatOptions $compressionLevel $file --out ${filename%.*}.jpg
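The difference the "$" makes can be seen with a toy variable (a throwaway sketch; the variable name quality is made up and has nothing to do with sips):

```shell
quality=50                      # stands in for compressionLevel
echo "formatOptions quality"    # without $: prints the literal word "quality"
echo "formatOptions $quality"   # with $: prints the value, "formatOptions 50"
```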
UPDATED ANSWER: I noticed you also want a pixel width,
so I have changed the code to accommodate it.
I have also added a "_" to the end of the output filename, which you can remove if you want.
The reason I put it there is so that I do not overwrite the originals and in effect create copies.
compressionLevel=$1
pixelWidth=$2
i=1 # index of the current item
for item # a for loop without "in" iterates over $1, $2, ...
do
if [ "$i" -gt 2 ]; then # items 1 and 2 are the sips options; the rest are file paths
echo "Processing $item"
sips -s format jpeg -s formatOptions "$compressionLevel" --resampleWidth "$pixelWidth" "$item" --out "${item%.*}_.jpg"
fi
((i++))
done
osascript -e 'tell app "Automator" to display dialog "Done." buttons {"OK"}'
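The argument handling — first two items are options, the rest are file paths — can be exercised without sips by simulating what Automator passes in. This sketch uses shift, an equivalent and slightly simpler pattern than the index counter above; the filenames are made up:

```shell
#!/bin/bash
# Simulate Automator's "as arguments" input: quality, width, then the dropped files.
set -- 50 800 "photo one.jpg" "photo two.jpg"
compressionLevel=$1
pixelWidth=$2
shift 2       # drop the two option arguments; "$@" is now just the files
for item in "$@"; do
  echo "would run: sips -s formatOptions $compressionLevel --resampleWidth $pixelWidth '$item'"
done
```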
I would suggest you do some reading on shell scripting to get the basics down.
There are plenty of references on the web, and Apple has this.
I am sure that if you ask, others can give you some good starting points; first search this site for similar questions, as I am sure this has been asked a thousand times.

Apple's Automator: compression settings for jpg?

When I run Apple's Automator to simply cut a bunch of images down to size, Automator also reduces the quality of the (jpg) files and they get blurry.
How can I prevent this? Are there settings that I can take control of?
Edit:
Or are there any other tools that do the same job but without affecting the image quality?
If you want to have finer control over the amount of JPEG compression, as kopischke said you'll have to use the sips utility, which can be used in a shell script. Here's how you would do that in Automator:
First get the files and the compression setting:
The Ask for Text action should not accept any input (right-click on it, select "Ignore Input").
Make sure that the first Get Value of Variable action is not accepting any input (right-click on it, select "Ignore Input"), and that the second Get Value of Variable takes the input from the first. This creates an array that is then passed on to the shell script. The first item in the array is the compression level that was given to the Automator script. The second is the list of files that the script will run the sips command on.
In the options on the top of the Run Shell Script action, select "/bin/bash" as the Shell and select "as arguments" for Pass Input. Then paste this code:
itemNumber=0
compressionLevel=0
for file in "$@"
do
if [ "$itemNumber" = "0" ]; then
compressionLevel=$file
else
echo "Processing $file"
filename="$file"
sips -s format jpeg -s formatOptions $compressionLevel "$file" --out "${filename%.*}.jpg"
fi
((itemNumber=itemNumber+1))
done
((itemNumber=itemNumber-1))
osascript -e "tell app \"Automator\" to display dialog \"${itemNumber} Files Converted\" buttons {\"OK\"}"
If you click on Results at the bottom, it'll tell you what file it's currently working on. Have fun compressing!
Automator’s “Crop Images” and “Scale Images” actions have no quality settings – as is often the case with Automator, simplicity trumps configurability. However, there is another way to access CoreImage’s image manipulation facilities without resorting to Cocoa programming: the Scriptable Image Processing System, which makes image processing functions available to
the shell via the sips utility. You can fiddle with the most minute settings using this, but as it is a bit arcane in handling, you might be better served with the second way,
AppleScript via Image Events, a scriptable faceless background application provided by OS X. There are crop and scale commands, and the option of specifying a compression level when saving as a JPEG with
save <image> as JPEG with compression level (low|medium|high)
Use a “Run AppleScript” action instead of your “Crop” / “Scale” one and wrap the Image Events commands in a tell application "Image Events" block, and you should be set. For instance, to scale the image to half its size and save as a JPEG in best quality, overwriting the original:
on run {input, parameters}
set output to {}
repeat with aPath in input
tell application "Image Events"
set aPicture to open aPath
try
scale aPicture by factor 0.5
set end of output to save aPicture as JPEG with compression level low
on error errorMessage
log errorMessage
end try
close aPicture
end tell
end repeat
return output -- next action processes edited files.
end run
– for other scales, adjust the factor accordingly (1 = 100 %, .5 = 50 %, .25 = 25 % etc.); for a crop, replace the scale aPicture by factor X by crop aPicture to {width, height}. Mac OS X Automation has good tutorials on the usage of both scale and crop.
Eric's code is just brilliant and can get most of the job done.
But if an image's filename contains a space, this workflow will not work (the space breaks the shell script's word-splitting when sips processes the files).
There is a simple solution for this: add a "Rename Finder Item" action to the workflow and
replace spaces with "_" or anything you like.
Then it's good to go.
Comment from '20
I changed the script into a quick action, without any prompts (for compression as well as confirmation). It duplicates the file and renames the original version to _original. I also included nyam's solution for the 'space' problem.
You can download the workflow file here: http://mobilejournalism.blog/files/Compress%2080%20percent.workflow.zip (file is zipped, because otherwise it will be recognized as a folder instead of workflow file)
Hopefully this is useful for anyone searching for a solution like this (like I did an hour ago).
Comment from '17
To avoid the "space" problem, it's smarter to change IFS than to rename files.
Back up the current IFS and change it to \n only, then restore the original IFS after the processing loop.
ORG_IFS=$IFS
IFS=$'\n'
for file in $@
do
...
done
IFS=$ORG_IFS
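The effect is easy to demonstrate with names containing spaces (the filenames below are made up; no files are touched):

```shell
ORG_IFS=$IFS
IFS=$'\n'
list=$'my photo.png\nother file.png'   # two filenames, each containing a space
count=0
for file in $list; do                  # splits on newlines only, not spaces
  echo "[$file]"
  count=$((count+1))
done
IFS=$ORG_IFS
echo "$count"    # 2 — each filename stayed intact
```

With the default IFS, the same loop would split on the spaces and iterate four times instead of two.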
