Write picture from canvas as jpeg - dart

I want to write a picture from a canvas as a JPEG.
I have the image as PNG-format bytes:
PictureRecorder recorder = PictureRecorder();
Canvas c = Canvas(recorder);
c.drawPaint(paint); // etc.
Picture p = recorder.endRecording();
Image img = await p.toImage(100, 100); // toImage returns a Future<Image> in current Flutter, so await it first
ByteData pngBytes = await img.toByteData(format: ImageByteFormat.png);
Is it possible to convert the PNG to JPEG, or is there some other function I can use instead of toImage()?

Related

How to convert a picture in pure black and white in Rust

I want to convert a picture to pure black and white (i.e. no grayscale) using the image crate; the result should be a picture with only 0 and 255 RGB values.
Following the docs I've written the following:
use image::GenericImageView; // brings get_pixel into scope

let img = image::open("photo.jpg").unwrap(); // Load picture
let gray_img = img.grayscale(); // Convert it
// Access a random pixel value
let px = gray_img.get_pixel(0, 0);
println!("{:?}", px); // Print the pixel's channel values
The problem is that whatever pixel I print, it gives me a grayscale value.
So, is there a function to convert an image to pure black and white? Something like Pillow's convert function in Python?
Here's how you can first build a grayscale image and then dither it down to a black-and-white one:
use image::{self, imageops::*};

let img = image::open("cat.jpeg").unwrap();
let mut img = img.grayscale();             // grayscale copy of the image
let mut img = img.as_mut_luma8().unwrap(); // mutable Luma8 view of that copy
dither(&mut img, &BiLevel);                // dither it down to pure black and white
img.save("cat.png").unwrap(); // this step is optional but convenient for testing
You should of course handle errors properly instead of just calling unwrap.

OpenCV and Windows 10 transparent image

Suppose we have the following color:
const Scalar TRANSPARENT2 = Scalar(255, 0, 255,0);
which is magenta but fully transparent: alpha = 0 (fully opaque would be 255).
Now I made the following test based on:
http://blogs.msdn.com/b/lucian/archive/2015/12/04/opencv-first-version-up-on-nuget.aspx
WriteableBitmap^ Grabcut::TestTransparent()
{
    Mat res(400, 400, CV_8UC4);
    res.setTo(TRANSPARENT2);

    WriteableBitmap^ wbmp = ref new WriteableBitmap(res.cols, res.rows);
    IBuffer^ buffer = wbmp->PixelBuffer;
    unsigned char* dstPixels;
    ComPtr<IBufferByteAccess> pBufferByteAccess;
    ComPtr<IInspectable> pBuffer((IInspectable*)buffer);
    pBuffer.As(&pBufferByteAccess);
    pBufferByteAccess->Buffer(&dstPixels);

    memcpy(dstPixels, res.data, res.step.buf[1] * res.cols * res.rows);
    return wbmp;
}
The issue I have is that the created image is not fully transparent; it still has a bit of alpha.
I understand there is a flaw in the memcpy'd data, but I am not really sure how to solve it. Any idea how to get it to alpha 0?
More details: to check, I wanted to save the image so I could read it back and test whether it works. I saw that imwrite has a snippet about transparency like in the image, but imwrite is not implemented yet. The transparency method is not working either.
Any light on this snippet?
Thanks.
Finally I did the conversion in the C# code. First, avoid calling CreateAlphaMat.
Then I used a BitmapEncoder to convert the data:
WriteableBitmap wb = new WriteableBitmap(bitmap.PixelWidth, bitmap.PixelHeight);
using (IRandomAccessStream stream = new InMemoryRandomAccessStream())
{
    BitmapEncoder encoder = await BitmapEncoder.CreateAsync(BitmapEncoder.PngEncoderId, stream);
    Stream pixelStream = bitmap.PixelBuffer.AsStream();
    byte[] pixels = new byte[pixelStream.Length];
    await pixelStream.ReadAsync(pixels, 0, pixels.Length);
    encoder.SetPixelData(BitmapPixelFormat.Bgra8, BitmapAlphaMode.Premultiplied,
        (uint)bitmap.PixelWidth, (uint)bitmap.PixelHeight, 96.0, 96.0, pixels);
    await encoder.FlushAsync();
    wb.SetSource(stream);
}
this.MainImage.Source = wb;
where bitmap is the WriteableBitmap holding the OpenCV result. And now the image is fully transparent.
NOTE: Do not use a MemoryStream and then .AsRandomAccessStream(), because FlushAsync won't work on it.
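If you also want to save the encoded PNG to a file so you can open it and inspect the alpha channel (since imwrite is not available in that build), the same BitmapEncoder pattern can target a StorageFile stream. This is a minimal sketch, not from the original answer, assuming the same bitmap variable as above, a hypothetical file name test.png, and that the app has access to the Pictures library:
StorageFile file = await KnownFolders.PicturesLibrary.CreateFileAsync(
    "test.png", CreationCollisionOption.ReplaceExisting);
using (IRandomAccessStream fileStream = await file.OpenAsync(FileAccessMode.ReadWrite))
{
    BitmapEncoder encoder = await BitmapEncoder.CreateAsync(BitmapEncoder.PngEncoderId, fileStream);
    // ToArray() is the extension method from System.Runtime.InteropServices.WindowsRuntime
    byte[] pixels = bitmap.PixelBuffer.ToArray();
    // Use Straight if the buffer holds raw (un-premultiplied) OpenCV channels,
    // Premultiplied if it was produced by XAML rendering
    encoder.SetPixelData(BitmapPixelFormat.Bgra8, BitmapAlphaMode.Straight,
        (uint)bitmap.PixelWidth, (uint)bitmap.PixelHeight, 96.0, 96.0, pixels);
    await encoder.FlushAsync();
}
Opening the resulting PNG in an external viewer is an easy way to confirm whether the alpha really is 0.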

Dynamically generated image is bigger than original image in asp.net mvc

In my ASP.NET MVC application I am uploading an image, resizing it to 800x600 pixels, converting it to PNG format and then saving it to disk. I am using the following code:
public void ResizeImg(HttpPostedFileBase UploadImg)
{
    if (UploadImg != null)
    {
        Stream s = UploadImg.InputStream;
        Image UploadedImg = Image.FromStream(s);
        int Width = UploadedImg.Width;
        int Height = UploadedImg.Height;
        int ResizeWidth = 800, ResizeHeight = 600;
        using (var newImage = new Bitmap(ResizeWidth, ResizeHeight))
        using (var graphics = Graphics.FromImage(newImage))
        using (var stream = new MemoryStream())
        {
            /* Resizing */
            graphics.SmoothingMode = SmoothingMode.AntiAlias;
            graphics.InterpolationMode = InterpolationMode.Default;
            graphics.PixelOffsetMode = PixelOffsetMode.Default;
            graphics.DrawImage(UploadedImg, new Rectangle(0, 0, ResizeWidth, ResizeHeight));
            newImage.Save(stream, ImageFormat.Png);
            /* Saving resized image */
            stream.Seek(0, SeekOrigin.Begin); // rewind before reading the encoded bytes back out
            using (FileStream fileStream = File.Create(HttpContext.Current.Server.MapPath("~/Images/GeneratedBarcode/testing.png"), (int)stream.Length))
            {
                byte[] bytesInStream = new byte[stream.Length];
                stream.Read(bytesInStream, 0, bytesInStream.Length);
                fileStream.Write(bytesInStream, 0, bytesInStream.Length);
            }
        }
    }
}
The code works, but the issue is that the original image is a 1024x768 JPG of 858 KB, and after resizing it is an 800x600 PNG of 1.16 MB.
Why, after resizing to a smaller size and converting to PNG, does the image become larger than the original?
I found a solution for this issue. Instead of saving the image in PNG format, I save it in JPEG format, like this:
//newImage.Save(stream, ImageFormat.Png);
newImage.Save(stream, ImageFormat.Jpeg);
It works for me because PNG is lossless: a resized photograph stored as PNG can easily be larger than the original lossy JPEG, while re-encoding as JPEG keeps it small.
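If you stay with JPEG, you can also control the size/quality trade-off explicitly instead of relying on the default encoder settings. This is a minimal sketch, not from the original answer, assuming the same newImage and stream variables from the code above and an example quality value of 80:
// Requires using System.Linq; and using System.Drawing.Imaging;
// Pick the built-in JPEG codec and save with an explicit quality level
// (System.Drawing's default is roughly 75; lower values mean smaller files).
ImageCodecInfo jpegCodec = ImageCodecInfo.GetImageEncoders()
    .First(c => c.FormatID == ImageFormat.Jpeg.Guid);
using (var encoderParams = new EncoderParameters(1))
{
    encoderParams.Param[0] = new EncoderParameter(Encoder.Quality, 80L);
    newImage.Save(stream, jpegCodec, encoderParams);
}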

Alpha Value issue in transparency

I'm creating a watermarking application using OpenCV, and I'm not able to set the background of the image as transparent.
I'm using this code: Scalar colorScalar = new Scalar(255,255,255,0);
Can anybody help me make the background transparent? I'm using a PNG-format image.
targetMat = new Mat(targetSize, scaledImage.type(), colorScalar);
Mat waterSubmat = targetMat.submat((int)offsetY,scaledImage.height(), (int)offsetX, scaledImage.width());
scaledImage.copyTo(waterSubmat);
center = new org.opencv.core.Point(pivotX, pivotY);
Mat rotImage = Imgproc.getRotationMatrix2D(center, degreevaluechange, 1);
Mat resultMat = new Mat(2,3, CvType.CV_32FC1);
colorScalar = new Scalar(255,255,255,0);
Imgproc.warpAffine(targetMat, resultMat, rotImage, targetSize, Imgproc.INTER_AREA, Imgproc.BORDER_CONSTANT, colorScalar);
scaledImage = resultMat.clone();
If you want to load your PNG image with the alpha channel, and therefore load your image with transparency, you have to use this code:
imread("image.png", -1)
You can find more information in the OpenCV documentation here:
http://docs.opencv.org/modules/highgui/doc/reading_and_writing_images_and_video.html?highlight=imread#imread
As you can see in the documentation provided by Maximus, you need to create a 4-channel Mat:
Mat targetMat(targetSize, CV_8UC4, colorScalar); // 4 channels so the alpha value is kept
vector<int> compression_params;
compression_params.push_back(CV_IMWRITE_PNG_COMPRESSION);
compression_params.push_back(9);
try {
    imwrite("alpha.png", targetMat, compression_params);
}
catch (runtime_error& ex) {
    fprintf(stderr, "Exception converting image to PNG format: %s\n", ex.what());
    return 1;
}
Then add the parameters and write. (This code is from the documentation)

How to scale an image to half size through an array of bytes?

I found many examples of how to scale an image in Windows Forms, but in this case I'm using an array of bytes in a Windows Store application. This is the code snippet I'm using:
// Now that you have the raw bytes, create an image decoder
BitmapDecoder decoder = await BitmapDecoder.CreateAsync(fileStream);
// Get the first frame from the decoder because we are picking an image
BitmapFrame frame = await decoder.GetFrameAsync(0);
// Convert the frame into pixels
PixelDataProvider pixelProvider = await frame.GetPixelDataAsync();
// Convert pixels into byte array
srcPixels = pixelProvider.DetachPixelData();
wid = (int)frame.PixelWidth;
hgt = (int)frame.PixelHeight;
// Create an in memory WriteableBitmap of the same size
bitmap = new WriteableBitmap(wid, hgt);
Stream pixelStream = bitmap.PixelBuffer.AsStream();
pixelStream.Seek(0, SeekOrigin.Begin);
// Push the pixels from the original file into the in-memory bitmap
pixelStream.Write(srcPixels, 0, (int)srcPixels.Length);
bitmap.Invalidate();
In this case, it is just creating a copy of the stream. I don't know how to manipulate the byte array to reduce it to half the width and height.
If you look at the MSDN documentation for GetPixelDataAsync, you can see that it has an overload that lets you specify a BitmapTransform to be applied during the operation.
So you can do something like this in your example code:
// decode a frame (as you do now)
BitmapDecoder decoder = await BitmapDecoder.CreateAsync(fileStream);
BitmapFrame frame = await decoder.GetFrameAsync(0);
// calculate required scaled size
uint newWidth = frame.PixelWidth / 2;
uint newHeight = frame.PixelHeight / 2;
// convert (and resize) the frame into pixels;
// request BGRA8 premultiplied, which is what WriteableBitmap.PixelBuffer expects
PixelDataProvider pixelProvider =
    await frame.GetPixelDataAsync(
        BitmapPixelFormat.Bgra8,
        BitmapAlphaMode.Premultiplied,
        new BitmapTransform() { ScaledWidth = newWidth, ScaledHeight = newHeight },
        ExifOrientationMode.RespectExifOrientation,
        ColorManagementMode.DoNotColorManage);
Now, you can call DetachPixelData as in your original code, but this will give you the resized image instead of the full sized image.
srcPixels = pixelProvider.DetachPixelData();
// create an in-memory WriteableBitmap of the scaled size
bitmap = new WriteableBitmap((int)newWidth, (int)newHeight);
Stream pixelStream = bitmap.PixelBuffer.AsStream();
pixelStream.Seek(0, SeekOrigin.Begin);
// push the pixels from the original file into the in-memory bitmap
pixelStream.Write(srcPixels, 0, (int)srcPixels.Length);
bitmap.Invalidate();
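If you would rather keep the full-size decode and shrink the byte array yourself, which is what the question originally asked about, a nearest-neighbor downsample over the BGRA byte array is enough. This is a minimal sketch, not from the original answer; HalveBgra8 is a hypothetical helper, and it assumes 4 bytes per pixel with a stride of width * 4 (the layout of srcPixels in the question):
// Halve an image stored as a BGRA8 byte array by taking the top-left pixel
// of every 2x2 block (nearest-neighbor downsampling).
static byte[] HalveBgra8(byte[] srcPixels, int srcWidth, int srcHeight,
                         out int dstWidth, out int dstHeight)
{
    dstWidth = srcWidth / 2;
    dstHeight = srcHeight / 2;
    byte[] dstPixels = new byte[dstWidth * dstHeight * 4];

    for (int y = 0; y < dstHeight; y++)
    {
        for (int x = 0; x < dstWidth; x++)
        {
            int srcIndex = ((y * 2) * srcWidth + (x * 2)) * 4; // source pixel offset
            int dstIndex = (y * dstWidth + x) * 4;             // destination pixel offset
            Buffer.BlockCopy(srcPixels, srcIndex, dstPixels, dstIndex, 4); // copy B, G, R, A
        }
    }
    return dstPixels;
}
The BitmapTransform overload above is usually the better choice, since the decoder does the filtering for you, but the manual version makes explicit what "half size" means at the byte level.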
