How to save Texture2D as an image in the gallery?

Let's say I have a 2D texture. How can I save it as a PNG or JPG image in Unity? Please show the most common approach, with comments if possible.

Texture2D has EncodeToPNG and EncodeToJPG methods. Call one of them to get the encoded bytes, then write those bytes to disk with System.IO.File.WriteAllBytes (on mobile, use a path under Application.persistentDataPath so the app is allowed to write there).

Related

Converting Image in OpenCV to same color space as Photoshop

I am trying to de-noise a .tif image using OpenCV's bilateralFilter() method, which is working.
However, when I use cv.imwrite() to save the image, it is saved in OpenCV's BGR channel order.
I want to save the image in the same color space as Adobe Photoshop.
I have attached the same image saved in Photoshop and also the image saved from OpenCV for comparison.
OpenCV Image and Photoshop Image Comparison
The colors between the two are very different. How can I save the file after the OpenCV changes so that it looks like a typical .tiff file opened in Photoshop?
Any help is appreciated!
I tried to convert the OpenCV image to RGB after I performed the bilateralFilter, but the OpenCV RGB is also different from what I see when opening the original image in Adobe Photoshop.

Can I create big texture from other small textures in webgl?

I have textures loaded in memory and I want to draw them in one draw call. I can put all the texture coordinates into a buffer, but how do I create one texture from the small texture parts? Is that possible?
Or must I download the images, combine them, and then create a texture from the combined big picture?
In general, combining images into a texture atlas is something you'd do offline, either manually in an image-editing program or using custom or specialized tools. That's the most common and recommended way.
If you have to do it at runtime for some reason, then the easiest way to combine images into a single texture is to first load all your images, then use the canvas 2D API to draw them into a 2D canvas, then use that canvas as a source for texImage2D in WebGL. The only issue with using a 2D canvas is if you need data other than images, because a 2D canvas only supports pre-multiplied alpha.
Otherwise, doing it in WebGL is just a matter of rendering your smaller textures into a larger texture. Rendering to a texture requires creating the texture, attaching it to a framebuffer, and then rendering as you would anything else. See this for rendering to a texture and this for rendering any part of an image to any place in the canvas or another texture.
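Whichever way the atlas is built, each quad's texture coordinates must be remapped from the [0, 1] range of its original small image to the sub-rectangle it occupies in the atlas. That bookkeeping is plain arithmetic; here is a hypothetical helper for a simple grid-packed atlas (the function name and grid layout are assumptions, not any WebGL API):

```python
def atlas_uv_rect(index, tile_w, tile_h, atlas_w, atlas_h, cols):
    """Return (u0, v0, u1, v1) for the index-th tile of a grid-packed
    atlas, in normalized [0, 1] texture coordinates."""
    col = index % cols          # column within the grid
    row = index // cols         # row within the grid
    u0 = col * tile_w / atlas_w
    v0 = row * tile_h / atlas_h
    u1 = u0 + tile_w / atlas_w
    v1 = v0 + tile_h / atlas_h
    return (u0, v0, u1, v1)

# Example: 256x256 tiles packed 4-across in a 1024x1024 atlas.
# Tile 5 sits at column 1, row 1.
print(atlas_uv_rect(5, 256, 256, 1024, 1024, 4))  # (0.25, 0.25, 0.5, 0.5)
```

A real packer would also leave a pixel or two of padding between tiles, otherwise neighboring tiles bleed into each other when the texture is filtered or mipmapped.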

AVCaptureVideoPreviewLayer add overlays and capture photo in iOS

The initial idea was to start a camera stream via AVCaptureSession, find faces in the raw CMSampleBuffer, then add some images as layers on the AVCaptureVideoPreviewLayer, and then take a screenshot.
After completing that, I found out that UIGraphicsGetImageFromCurrentImageContext won't work with AVCaptureVideoPreviewLayer, so taking a screenshot would not solve my purpose here.
So I used Metal and MTKView instead to perform live rendering, and the results are good with the combination of Core Image filters and Metal. I already know how to detect faces and alter that part of the face using built-in Core Image filters, but I can't find a suitable method to add an image on top of another image.
How can I blend two images with respect to positioning in the background image? I have CIImage instances to work with.
You can load your overlay into a CIImage, then use transformed(by:) (passing a CGAffineTransform) to move it to the face position, and finally use composited(over:) to blend it over the CIImage from the video buffer.
You probably have to put in some work to transfer between the different coordinate spaces.
There are also a lot of more complex compositing filters available. Check out the filters in the CICategoryCompositeOperation category.

Can CIImage be convert to vector graphic?

I tried many ways, such as exporting to PDF or SVG, but they all failed; the results are nil.
Can Core Image not export vector graphics on iOS?
Any guidance is appreciated.
No. Core Image is a framework for working with raster images.
A CIImage can definitely be converted to a vector graphic, but probably not out of the box with Core Image. There are libraries such as OpenCV and others that can process and vectorize a bitmap image. Your results may vary based on your source image and intended output.

Read RGBA Image OpenCV

I am stuck on reading, processing, and displaying a sample.png image which contains RGB and an additional alpha layer.
I have manually removed the background in this image, so only the foreground appears in the Windows image slideshow program. I couldn't find any useful information anywhere: when I read the image in OpenCV using imread or cvLoadImage, a white background is created by itself. I have read the highgui documentation, which states that these functions only deal with RGB, not RGBA. Any help or ideas would be appreciated.
Thanks,
Saleh
AFAIK the only current solution is to load the alpha channel as a separate image and then join the two. You can use cvtColor() to add an alpha channel to the Mat holding the image, and e.g. mixChannels() to mix in the separately loaded alpha image.
You can use cv::imread() with the IMREAD_UNCHANGED flag to read the data into a cv::Mat structure. If you still need an IplImage to work with, it is possible to convert from cv::Mat to IplImage without losing the alpha channel.
