Go Resizing Images - image-processing

I am using the Go resize package here: https://github.com/nfnt/resize
I am pulling an image from S3, like so:
image_data, err := mybucket.Get(key)
// this gives me data []byte
After that, I need to resize the image:
new_image := resize.Resize(160, 0, original_image, resize.Lanczos3)
// problem is that the original_image has to be of type image.Image
Then I upload the image to my S3 bucket:
err := mybucket.Put("newpath", new_image, "image/jpg", "aclstring")
// problem is that new_image needs to be of type []byte
How do I transform data []byte into an image.Image and back into a []byte?

Read http://golang.org/pkg/image
// you need the image package, and a format package for encoding/decoding
import (
"bytes"
"image"
"image/jpeg" // if you don't need to use jpeg.Encode, use this line instead
// _ "image/jpeg"
"github.com/nfnt/resize"
)
// Decoding gives you an Image.
// If you have an io.Reader already, you can give that to Decode
// without reading it into a []byte.
original_image, _, err := image.Decode(bytes.NewReader(data)) // don't name this "image", or it will shadow the image package
// check err
newImage := resize.Resize(160, 0, original_image, resize.Lanczos3)
// Encode writes to an io.Writer; use a bytes.Buffer if you need the raw []byte
err = jpeg.Encode(someWriter, newImage, nil)
// check err
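Putting it all together, here is a minimal sketch of the full []byte -> image.Image -> []byte round trip, using the imports above (the helper name resizeJPEG is mine, not part of any package):

func resizeJPEG(data []byte, width uint) ([]byte, error) {
    // decode the raw bytes into an image.Image (format detected automatically)
    original_image, _, err := image.Decode(bytes.NewReader(data))
    if err != nil {
        return nil, err
    }
    // resize to the requested width; height 0 preserves the aspect ratio
    new_image := resize.Resize(width, 0, original_image, resize.Lanczos3)
    // encode back to JPEG into an in-memory buffer
    var buf bytes.Buffer
    if err := jpeg.Encode(&buf, new_image, nil); err != nil {
        return nil, err
    }
    return buf.Bytes(), nil
}

The returned buf.Bytes() slice is what you hand to mybucket.Put.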

The OP is using a specific library/package, but the general problem of "Go Resizing Images" can be solved without it.
You can resize the image using golang.org/x/image/draw:
// imports needed: "image", "image/png", "os", "golang.org/x/image/draw"
input, _ := os.Open("your_image.png")
defer input.Close()
output, _ := os.Create("your_image_resized.png")
defer output.Close()
// Decode the image (from PNG to image.Image):
src, _ := png.Decode(input)
// Set the expected size that you want:
dst := image.NewRGBA(image.Rect(0, 0, src.Bounds().Max.X/2, src.Bounds().Max.Y/2))
// Resize:
draw.NearestNeighbor.Scale(dst, dst.Rect, src, src.Bounds(), draw.Over, nil)
// Encode to `output`:
png.Encode(output, dst)
In this case I chose draw.NearestNeighbor because it is the fastest, but it also gives the lowest-quality result. There are other interpolators, listed at https://pkg.go.dev/golang.org/x/image/draw#pkg-variables (a one-line swap is shown after the list):
draw.NearestNeighbor
NearestNeighbor is the nearest neighbor interpolator. It is very fast, but usually gives very low quality results. When scaling up, the result will look 'blocky'.
draw.ApproxBiLinear
ApproxBiLinear is a mixture of the nearest neighbor and bi-linear interpolators. It is fast, but usually gives medium quality results.
draw.BiLinear
BiLinear is the tent kernel. It is slow, but usually gives high quality results.
draw.CatmullRom
CatmullRom is the Catmull-Rom kernel. It is very slow, but usually gives very high quality results.
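For example, swapping in the highest-quality kernel is a one-line change to the snippet above, with the same src and dst as before:

// slower, but much better looking than NearestNeighbor
draw.CatmullRom.Scale(dst, dst.Rect, src, src.Bounds(), draw.Over, nil)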

Want to do it 29 times faster? Try the amazing vipsthumbnail instead:
sudo apt-get install libvips-tools
vipsthumbnail --help-all
This will resize, nicely crop, and save the result to a file:
vipsthumbnail original.jpg -s 700x200 -o 700x200.jpg -c
Calling from Go:
// imports needed: "os/exec", "strconv"
func resizeExternally(from string, to string, width uint, height uint) error {
    var args = []string{
        "--size", strconv.FormatUint(uint64(width), 10) + "x" +
            strconv.FormatUint(uint64(height), 10),
        "--output", to,
        "--crop",
        from,
    }
    path, err := exec.LookPath("vipsthumbnail")
    if err != nil {
        return err
    }
    cmd := exec.Command(path, args...)
    return cmd.Run()
}
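A call site would then look something like this (the file names are just placeholders):

if err := resizeExternally("original.jpg", "700x200.jpg", 700, 200); err != nil {
    // handle the error (e.g. vipsthumbnail is not installed)
}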

You could use bimg, which is powered by libvips (a fast image processing library written in C).
If you are looking for an image resizing solution as a service, take a look at imaginary.
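A minimal sketch with bimg, assuming its usual Read/Resize/Write helpers from github.com/h2non/bimg (the wrapper function name is mine):

import "github.com/h2non/bimg"

func resizeWithBimg(in, out string, width, height int) error {
    buffer, err := bimg.Read(in)
    if err != nil {
        return err
    }
    resized, err := bimg.NewImage(buffer).Resize(width, height)
    if err != nil {
        return err
    }
    return bimg.Write(out, resized)
}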

Related

OpenCV frame blending only results in blue

I'm trying to average every 30 frames of a video to create a blurred timelapse. I got the video reading and video writing working, but something is wrong, because I'm only seeing the blue channel! (or one channel that is being written to blue).
Any ideas? Or better ways to do this? I'm new to OpenCV. The code is in Kotlin, but I think it would be the same issue in Java or Python or whatever.
val videoCapture = VideoCapture(parsedArgs.inputFile)
val frameSize = Size(
    videoCapture.get(Videoio.CV_CAP_PROP_FRAME_WIDTH),
    videoCapture.get(Videoio.CV_CAP_PROP_FRAME_HEIGHT))
val fps = videoCapture.get(Videoio.CAP_PROP_FPS)
val videoWriter = VideoWriter(parsedArgs.outputFile, VideoWriter.fourcc('M', 'J', 'P', 'G'), fps, frameSize)
val image = Mat(frameSize, CV_8UC3)
val blended = Mat(frameSize, CV_64FC3)
println("Size: $frameSize fps:$fps over $frameCount frames")
try {
    while (videoCapture.read(image)) {
        val frameNumber = videoCapture.get(Videoio.CAP_PROP_POS_FRAMES).toInt()
        Core.flip(image, image, -1) // I shot the video upside down
        Imgproc.accumulate(image, blended)
        if (frameNumber > 0 && frameNumber % parsedArgs.windowSize == 0) {
            Core.multiply(blended, Scalar(1.0 / parsedArgs.windowSize), blended)
            blended.convertTo(image, CV_8UC3)
            videoWriter.write(image)
            blended.setTo(Scalar(0.0, 0.0, 0.0))
            println(frameNumber.toDouble() / frameCount)
        }
    }
} finally {
    videoCapture.release()
    videoWriter.release()
}
Martin Beckett led me to the right answer (thank you!). I was multiplying by a Scalar(double), which should have been my hint, since I wasn't multiplying by a plain double.
Core.multiply expects a Scalar with a value for each channel, so it was happily multiplying my first channel by the double and the rest by 0.
Imgproc.accumulate(image, blended64)
if (frameNumber > 0 && frameNumber % parsedArgs.windowSize == 0) {
    val blendDivisor = 1.0 / parsedArgs.windowSize
    Core.multiply(blended64, Scalar(blendDivisor, blendDivisor, blendDivisor), blended64)
My guess would be the use of different types in Imgproc.accumulate(image, blended); try converting image to match blended before combining them.
If it was writing the entire 8-bit*3 pixel data into one float, note that the first channel in an OpenCV image is blue (it uses BGR order).

Node-gm circular image crop using Imagemagick

I've been trying to use node-gm + ImageMagick to crop an image into a circle.
Anyway, here's my attempt at creating a mask using a black circle.
var original = 'app-server/photo.jpg';
var output = 'app-server/photo.png';
var maskPath = 'app-server/photo-mask.png';
gm(original)
  .crop(233, 233, 29, 26)
  .resize(80, 80)
  .setFormat('png')
  .write(output, function (err) {
    console.log(err || 'cropped to target size');
    gm(output)
      .out('-size', '80x80')
      .background('black')
      .drawCircle(20, 20, 0, 0)
      .toBuffer('PNG', function (err, buffer) {
        console.log(err || 'created circular black mask');
        // docs say "a buffer can be passed instead of a filepath"
        // but this is apparently false, and they say something unclear
        // about using black/white colors for masking. I'm clearly lost.
        gm(output)
          .mask(maskPath)
          .write(output, function (err) {
            console.log(err || 'applied circular black mask to image');
          });
      });
  });
I'm sure this can be done via some fancy string command concatenation, but despite my lack of image-processing prowess, I still want to keep the code clean. I'm really looking for a solution using node-gm functions, preferably with fewer operations than my attempt (and preferably something that works, unlike mine).
I also tried to chain out the function calls for this command with no success:
https://stackoverflow.com/a/999563/1267778
Note I need to crop at a specific location (w,h,x,y) so these solutions also don't work for me:
node-pngjs
node-circle-image
Got it! After many hours of fiddling, I got exactly what I needed.
gm(originalFilePath)
  .crop(233, 233, 29, 26)
  .resize(size, size)
  .write(outputFilePath, function(err) {
    gm(size, size, 'none')
      .fill(outputFilePath)
      .drawCircle(size/2, size/2, size/2, 0)
      .write(outputFilePath, function(err) {
        console.log(err || 'done');
      });
  });
I'm using JCrop to allow the user to crop the image on the front-end and pass the coordinates (w,h,x,y) into crop().

Received msg_Image getting distorted while displaying in OpenCV

I have published an image from one node, and I want to subscribe to that image in my second node. But after subscribing in the second node, when I try to store it in a cv::Mat image, it gets distorted.
The patchImage in the following code is distorted: there are some horizontal lines, and four copies of the same image appear merged together.
An overview of my code follows.
first_node_publisher
{
    im.header.stamp = time;
    im.width = width;
    im.height = height;
    im.step = 3*width;
    im.encoding = "rgb8";
    image_pub.publish(im);
}

second_node_imageCallBack(const sensor_msgs::ImageConstPtr& msg)
{
    cv::Mat patchImage;
    cv_bridge::CvImagePtr cv_ptr;
    try
    {
        cv_ptr = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::RGB8);
    }
    catch (cv_bridge::Exception& e)
    {
        ROS_ERROR("cv_bridge exception: %s", e.what());
    }
    patchImage = cv_ptr->image;
    imshow("Received Image", patchImage); // this patchImage is distorted
}
I believe the problem is with your encoding setting; are you sure the encoding is actually rgb8? That is unlikely, because OpenCV stores images by default in BGR format (such as CV_8UC3). It is also possible that your images are not even stored as unsigned characters, but as shorts, floats, doubles, etc.
I always include assert(image.type() == CV_8UC3) in my publishers to make sure the encoding is correct.

Loss of data when extracting frames from GIF to PNG?

When I try to use fraxel's answer on
http://stackoverflow.com/questions/10269099/pil-convert-gif-frames-to-jpg
on the image http://24.media.tumblr.com/fffcc2d8e980fbba4f87d51ed4916b87/tumblr_mh8uaqMo2I1rkp3avo2_250.gif
I get OK data for some frames, but for others it looks like data is missing, e.g.
Correct
Missing
To display these I use ImageMagick's display foo* and then press space to move through the images... is it possible ImageMagick is reading them wrong?
Edit:
Even when using convert and then displaying via display foo*, I get the following.
Could this be a characteristic of the GIF, then?
If you can stick to ImageMagick then it is very simple to solve this:
convert input.gif -coalesce output.png
Otherwise, you will have to consider the different forms of how each GIF frame can be constructed. For this specific type of GIF, and also the other one shown in your other question, the following code works (note that in your earlier question, the accepted answer doesn't actually make all the split parts transparent -- at least with the latest released PIL):
import sys
from PIL import Image, ImageSequence

img = Image.open(sys.argv[1])
pal = img.getpalette()
prev = img.convert('RGBA')
prev_dispose = True
for i, frame in enumerate(ImageSequence.Iterator(img)):
    dispose = frame.dispose
    if frame.tile:
        x0, y0, x1, y1 = frame.tile[0][1]
        if not frame.palette.dirty:
            frame.putpalette(pal)
        frame = frame.crop((x0, y0, x1, y1))
        bbox = (x0, y0, x1, y1)
    else:
        bbox = None
    if dispose is None:
        prev.paste(frame, bbox, frame.convert('RGBA'))
        prev.save('foo%02d.png' % i)
        prev_dispose = False
    else:
        if prev_dispose:
            prev = Image.new('RGBA', img.size, (0, 0, 0, 0))
        out = prev.copy()
        out.paste(frame, bbox, frame.convert('RGBA'))
        out.save('foo%02d.png' % i)
Ultimately you will have to recreate what -coalesce does, since it is likely that the code above may not work with certain GIF images.
You should try keeping the whole history of frames in "background", instead of:
background = Image.new("RGB", size, (255,255,255))
background.paste( lastframe )
background.paste( im2 )
Just create the "background" once before the loop, then only paste() each frame onto it; it should work.

Image transparency darkened when saved using OpenCv

I created a drawing application where I allow the user to draw and save the image, to later reload it and continue drawing. Essentially, I'm passing the drawing as a bitmap to the JNI layer to be saved, and doing the same to load a previous drawing.
I'm using OpenCV to write and read the PNG file.
I'm noticing something weird with the transparency of the image. It almost seems as if the transparency is being calculated against a black color in OpenCV. Take a look at the attached images; they contain lines that have transparency.
Correct transparency by passing int array to native code, no color conversion needed:
Darkened transparency by passing Bitmap object to native code, color conversion needed:
What could potentially be happening?
Saving image using native Bitmap get pixel methods:
if ((error = AndroidBitmap_getInfo(pEnv, jbitmap, &info)) < 0) {
    LOGE("AndroidBitmap_getInfo() failed! error:%d", error);
}
if (0 == error)
{
    if ((error = AndroidBitmap_lockPixels(pEnv, jbitmap, &pixels)) < 0) {
        LOGE("AndroidBitmap_lockPixels() failed ! error=%d", error);
    }
}
if (0 == error)
{
    if (info.format == ANDROID_BITMAP_FORMAT_RGBA_8888)
    {
        LOGI("ANDROID_BITMAP_FORMAT_RGBA_8888");
    }
    else
    {
        LOGI("ANDROID_BITMAP_FORMAT %d", info.format);
    }
    Mat bgra(info.height, info.width, CV_8UC4, pixels);
    Mat image;
    //bgra.copyTo(image);
    // fix pixel order RGBA -> BGRA
    cvtColor(bgra, image, COLOR_RGBA2BGRA);
    vector<int> compression_params;
    compression_params.push_back(CV_IMWRITE_PNG_COMPRESSION);
    compression_params.push_back(3);
    // save image
    if (!imwrite(filePath, image, compression_params))
    {
        LOGE("saveImage() -> Error saving image!");
        error = -7;
    }
    // release locked pixels
    AndroidBitmap_unlockPixels(pEnv, jbitmap);
}
Saving image using native int pixel array methods:
JNIEXPORT void JNICALL Java_com_vblast_smasher_Smasher_saveImageRaw
  (JNIEnv *pEnv, jobject obj, jstring jFilePath, jintArray jbgra, jint options, jint compression)
{
    jint* _bgra = pEnv->GetIntArrayElements(jbgra, 0);
    const char *filePath = pEnv->GetStringUTFChars(jFilePath, 0);
    if (NULL != filePath)
    {
        Mat image;
        Mat bgra(outputHeight, outputWidth, CV_8UC4, (unsigned char *)_bgra);
        bgra.copyTo(image);
        if (0 == options)
        {
            // replace existing cache value
            mpCache->insert(filePath, image);
        }
        vector<int> compression_params;
        compression_params.push_back(CV_IMWRITE_PNG_COMPRESSION);
        compression_params.push_back(compression);
        // save image
        if (!imwrite(filePath, image))
        {
            LOGE("saveImage() -> Error saving image!");
        }
    }
    pEnv->ReleaseIntArrayElements(jbgra, _bgra, 0);
    pEnv->ReleaseStringUTFChars(jFilePath, filePath);
}
Update 05/25/12:
After a little more research, I'm finding that this issue does not happen if I get the int array of pixels from the bitmap and pass that directly to JNI, as opposed to what I do currently, which is to pass the entire Bitmap to the JNI layer, get the pixels there, and use cvtColor to convert them. Am I using the right pixel conversion?
There are two ways representing alpha in an RGBA pixel, premultiplied or not. With premultiplication, the R, G, and B values are multiplied by the percentage of alpha: color = (color * alpha) / 255. This simplifies a lot of blending calculations and is often used internally in imaging libraries. Before saving out to a format that doesn't use premultiplied alpha, such as PNG, the color values must be "unmultiplied": color = (255 * color) / alpha. If it is not, the colors will look too dark; the more transparent the color, the darker it will be. That looks like the effect you're seeing here.
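As a purely illustrative sketch of that arithmetic (written in Go only because the main question on this page is about Go; the helper names are mine and not part of any library mentioned here):

// premultiply/unpremultiply one 8-bit color channel against an 8-bit alpha
func premultiply(color, alpha uint8) uint8 {
    return uint8(uint16(color) * uint16(alpha) / 255)
}

func unpremultiply(color, alpha uint8) uint8 {
    if alpha == 0 {
        return 0 // fully transparent; the color is undefined
    }
    return uint8(uint16(color) * 255 / uint16(alpha))
}

Writing premultiplied values straight to PNG skips that second step, which matches the "darker the more transparent" effect described above.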
There is no such thing as a transparent image in OpenCV. The foreground and background images are blended appropriately to give the illusion of transparency. Check this to see how it's done.
