How to convert int* (bgr image) from c++ to unity3d texture image? - opencv

I have a BGR image in uchar format from OpenCV (C++).
The function is like int* texture(int* data, int width, int height); it processes the image on the C++ end and returns a pointer to the data. How do I convert this data into a texture in Unity, i.e. make this data available to be used as a texture? I don't want to write it to a file. Please help.
Code snippet (I am using DLLs):
public static WebCamTexture webCamTexture;
private Color32[] data;
private int[] imageData;
private int[] imdat;

void Start () {
    ....
    data = new Color32[webCamTexture.width * webCamTexture.height];
    imageData = new int[data.Length * 3];
}

void Update()
{
    webCamTexture.GetPixels32(data);
    // Convert the Color32[] into an int[] in BGR order
    for (int i = 0; i < data.Length; ++i)
    {
        imageData[i * 3] = (int)data[i].b;
        imageData[i * 3 + 1] = (int)data[i].g;
        imageData[i * 3 + 2] = (int)data[i].r;
    }
    // this is the function called from the DLL
    imdat = texture(imageData, webCamTexture.width, webCamTexture.height);
}
And the DLL end looks like:
char *tmp;
int* texture(int* imageData, int width, int height)
{
    int n = width * height * 3;
    tmp = new char[n];
    // imageData is inverted here and then copied into tmp (3-channel image)
    for (int i = 0; i < (width * 3); ++i)
        for (int j = 0; j < height; ++j)
            tmp[i + j * (width * 3)] = (char)imageData[i + (height - j - 1) * (width * 3)];
    return (int*)tmp;
}

I'm not sure what format your texture is in, but if you can convert it into a byte[] you can use Texture2D.LoadImage(byte[]) to turn it into a working texture.

You should be able to achieve what you want with BitConverter.GetBytes() and Texture2D.LoadImage(). Make sure you take special note of the image format restrictions in the Unity manual page for LoadImage.
I'm not sure how the binding between your C++-land and C#-land code works, but you should be able to do something a little like this:
/* iImportedTexture = [Your c++ function here]; */
byte[] bImportedTexture = BitConverter.GetBytes(iImportedTexture);
Texture2D importedTexture = new Texture2D(2, 2);
importedTexture.LoadImage(bImportedTexture); // LoadImage returns bool and fills the texture in place

Related

How do I convert ByteArray from ImageMetaData() to Bitmap?

I have this code:
Frame frame = mSession.update();
Camera camera = frame.getCamera();
...
bytes=frame.getImageMetadata().getByteArray(0);
System.out.println("Byte Array "+frame.getImageMetadata().getByteArray(0));
Bitmap bmp = BitmapFactory.decodeByteArray(bytes,0,bytes.length);
System.out.println(bmp);
When I print the Bitmap, I get a null object. I'm trying to get the image from the camera; that's the reason I'm trying to convert the byte array to a Bitmap. If there's an alternative way, it would also be helpful.
Thank You.
The ImageMetaData describes the background image, but does not actually contain the image itself.
If you want to capture the background image as a Bitmap, you should look at the computervision sample which uses a FrameBufferObject to copy the image to a byte array.
I've tried something similar. It works, but I don't recommend anyone try this way: the nested loops make it slow.
CameraImageBuffer inputImage;
final Bitmap bmp = Bitmap.createBitmap(inputImage.width, inputImage.height, Bitmap.Config.ARGB_8888);
int width = inputImage.width;
int height = inputImage.height;
int frameSize = width * height;
// Copy the ByteBuffer into a byte[]
byte[] imageBuffer = new byte[inputImage.buffer.remaining()];
inputImage.buffer.get(imageBuffer);
int[] rgba = new int[frameSize];
for (int i = 0; i < height; i++) {
    for (int j = 0; j < width; j++) {
        // Mask with 0xff -- Java bytes are signed
        int r = imageBuffer[(i * width + j) * 4 + 0] & 0xff;
        int g = imageBuffer[(i * width + j) * 4 + 1] & 0xff;
        int b = imageBuffer[(i * width + j) * 4 + 2] & 0xff;
        rgba[i * width + j] = 0xff000000 | (b << 16) | (g << 8) | r;
    }
}
bmp.setPixels(rgba, 0, width, 0, 0, width, height);
The ByteBuffer is converted to an RGBA int buffer, which is then written to the Bitmap. CameraImageBuffer is the class provided in the computervision sample app.
You may not be able to get a bitmap using the image metadata. Use the approach below: override the onDrawFrame method of the surface view renderer.
@Override
public void onDrawFrame(GL10 gl) {
    int w = 1080;
    int h = 1080;
    int b[] = new int[w * h];
    int bt[] = new int[w * h];
    IntBuffer ib = IntBuffer.wrap(b);
    ib.position(0);
    GLES20.glReadPixels(0, 0, w, h, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, ib);
    // glReadPixels fills the buffer bottom-up, and on a little-endian device
    // the RGBA bytes land in each int as 0xAABBGGRR, so flip the rows
    // and swap the red and blue channels
    for (int i = 0; i < h; i++) {
        for (int j = 0; j < w; j++) {
            int pix = b[i * w + j];
            int pb = (pix >> 16) & 0xff;
            int pr = (pix << 16) & 0x00ff0000;
            int pix1 = (pix & 0xff00ff00) | pr | pb;
            bt[(h - i - 1) * w + j] = pix1;
        }
    }
    final Bitmap mBitmap = Bitmap.createBitmap(bt, w, h, Bitmap.Config.ARGB_8888);
    runOnUiThread(new Runnable() {
        @Override
        public void run() {
            image_test.setImageBitmap(mBitmap);
        }
    });
}

ImageMagick load image into RAM

I have a JPG picture on which I'd like to perform some operations in order to use pattern recognition. The picture is rotated, and filters like color inversion, greyscale, etc. are applied.
The program goes like this:
for (i = 0; i < 360; i++) {
    rotate(pic, i);
    foreach (filter as f) {
        f(pic);
        recognize(pic);
    }
}
In order to increase speed I'd like to have the source image loaded in RAM and then read from there. Is it possible?
You can write the image to mpr:, or clone the image instance to a new structure. Regardless of where the original source is in memory, you will still need to copy the data in the first for loop. Here's an example, in C, that holds a wand instance and clones it each iteration.
#include <stdio.h>
#include <MagickWand/MagickWand.h>

void rotate(MagickWand * wand, double degree) {
    PixelWand * pwand = NewPixelWand();
    PixelSetColor(pwand, "white");
    MagickRotateImage(wand, pwand, degree);
    DestroyPixelWand(pwand);
}

void _f(MagickWand * wand, FilterTypes filter) {
    double x, y;
    x = y = 0.0;
    MagickResampleImage(wand, x, y, filter);
}

void recognize(MagickWand * wand) {
    // ???
}

int main(int argc, const char * argv[]) {
    MagickWandGenesis();
    MagickWand * wand, * copy_wand;
    wand = NewMagickWand();
    MagickReadImage(wand, "rose:");
    for ( int i = 0; i < 360; i++ ) {
        copy_wand = CloneMagickWand(wand);
        rotate(copy_wand, i);
        for ( FilterTypes f = UndefinedFilter; f < SentinelFilter; f++ ) {
            _f(copy_wand, f);
            recognize(copy_wand);
        }
        copy_wand = DestroyMagickWand(copy_wand); // free each per-iteration clone
    }
    wand = DestroyMagickWand(wand);
    MagickWandTerminus();
    return 0;
}
The MPR approach writes to a specific page in memory, identified by a user-defined label.
MagickReadImage(wand, "rose:");
MagickWriteImage(wand, "mpr:original"); // Save image to "original" label
for ( int i = 0; i < 360; i++ ) {
    copy_wand = NewMagickWand();
    MagickReadImage(copy_wand, "mpr:original"); // Read image from "original" label
    for ( FilterTypes f = UndefinedFilter; f < SentinelFilter; f++ ) {
        _f(copy_wand, f);
        recognize(copy_wand);
    }
    copy_wand = DestroyMagickWand(copy_wand);
}
The last option I can think of is to copy the image pixel data into memory, and re-reference it with each iteration. This allows some performance improvements (I'm thinking OpenMP), but you'll lose a lot of helper methods.
MagickReadImage(wand, "rose:");
size_t w = MagickGetImageWidth(wand);
size_t h = MagickGetImageHeight(wand);
size_t data_length = w * h * 4;
char * data = malloc(data_length);
MagickExportImagePixels(wand, 0, 0, w, h, "RGBA", CharPixel, (void *)data);
for ( int i = 0; i < 360; i++ ) {
    char * copy_data = malloc(data_length);
    memcpy(copy_data, data, data_length);
As you haven't specified a language or an operating system, I'll show you how to do that with Magick++ in C++ in a Linux/OSX environment:
#include <Magick++.h>
#include <iostream>
using namespace std;
using namespace Magick;
int main(int argc, char **argv)
{
    InitializeMagick(*argv);
    // Create an image object
    Image image;
    // Read a file into image object
    image.read( "input.gif" );
    // Crop the image to specified size (width, height, xOffset, yOffset)
    image.crop( Geometry(100, 100, 0, 0) );
    // Repage the image to forget it was part of something bigger
    image.repage();
    // Write the image to a file
    image.write( "result.gif" );
    return 0;
}
Compile with:
g++ -o program program.cpp `Magick++-config --cppflags --cxxflags --ldflags --libs`
You will need an image called input.gif for it to read, and it should be bigger than 100x100, so create one with:
convert -size 256x256 xc:gray +noise random input.gif

opencv cv::mat not returning the same result

int sizeOfChannel = (_width / 2) * (_height / 2);
double* channel_gr = new double[sizeOfChannel];
// filling the data into channel_gr....
cv::Mat my( _width/2, _height/2, CV_32F, channel_gr);
cv::Mat src(_width/2, _height/2, CV_32F);
for (int i = 0; i < (_width/2) * (_height/2); ++i)
{
    src.at<float>(i) = channel_gr[i];
}
cv::imshow("src", src);
cv::imshow("my", my);
cv::waitKey(0);
I'm wondering why I'm not getting the same image in the my and src imshow windows.
Update:
I have changed my array into double*, still the same result.
I think it is something to do with steps?
my image output
src image output
this one works for me:
int halfWidth = _width/2;
int halfHeight = _height/2;
int sizeOfChannel = halfHeight*halfWidth;
// ******************************* //
// you use CV_32FC1 later, so it is single-precision float
float* channel_gr = new float[sizeOfChannel];
// filling the data into channel_gr....
for(int i=0; i<sizeOfChannel; ++i) channel_gr[i] = i/(float)sizeOfChannel;
// ******************************* //
// changed row/col ordering: cv::Mat takes (rows, cols)
cv::Mat my( halfHeight, halfWidth, CV_32FC1, channel_gr);
cv::Mat src(halfHeight, halfWidth, CV_32FC1);
// ******************************* //
// changed from 1D indexing to 2D indexing
for(int y=0; y<src.rows; ++y)
    for(int x=0; x<src.cols; ++x)
    {
        int arrayPos = y*halfWidth + x;
        // you have a 2D mat so access it in 2D
        src.at<float>(y,x) = channel_gr[arrayPos];
    }
cv::imshow("src", src);
cv::imshow("my", my);
// check for differences
cv::imshow("diff1 > 0", src-my > 0);
cv::imshow("diff2 > 0", my-src > 0);
cv::waitKey(0);
'my' is an array of floats, but you give it a pointer to an array of doubles. There is no way it can read the data from that array properly.
It seems that the constructor version you are using is
Mat::Mat(int rows, int cols, int type, void* data, size_t step=AUTO_STEP)
This is from the OpenCV docs. Seems like you are using float (CV_32F) for src but assigning from channel_gr (declared as double). Isn't that some form of precision loss?

How to copy frame data from FFmpegSource2 (FFMS2) FFMS_Frame struct to OpenCV Mat?

I'm trying to read a video file using FFmpegSource2 (FFMS2) and then process the frames using OpenCV. What is the proper and efficient way to copy frame data from an FFMS_Frame struct returned by the FFMS_GetFrame function to an OpenCV Mat?
Thank you very much in advance.
For now I am using the following procedure, which works for the BGR color format.
Step 1. Use FFMS_SetOutputFormatV2 and FFMS_GetPixFmt( "bgra" ) to set the output pixel format of FFMS to BGRA.
int anPixFmts[2];
anPixFmts[0] = FFMS_GetPixFmt( "bgra" );
anPixFmts[1] = -1;
if( FFMS_SetOutputFormatV2( pstFfmsVidSrc, anPixFmts,
        pstFfmsFrameProps->EncodedWidth, pstFfmsFrameProps->EncodedHeight,
        FFMS_RESIZER_BICUBIC, &stFfmsErrInfo ) )
{
    // handle error
}
Step 2. Read the desired frame using FFMS_GetFrame.
int nCurFrameNum = 5;
const FFMS_Frame *pstCurFfmsFrame = FFMS_GetFrame( pstFfmsVidSrc, nCurFrameNum, &stFfmsErrInfo );
Step 3. Copy data from pstCurFfmsFrame to an OpenCV Mat, oMatA:
Mat oMatA;
oMatA = Mat::zeros( pstCurFfmsFrame->EncodedHeight, pstCurFfmsFrame->EncodedWidth, CV_8UC3 );
for( int nRi = 0; nRi < oMatA.rows; nRi++ )
{
    // Rows in the FFMS frame are Linesize[0] bytes apart, which may be
    // wider than EncodedWidth * 4
    const uint8_t *pRow = pstCurFfmsFrame->Data[0] + nRi * pstCurFfmsFrame->Linesize[0];
    for( int nCi = 0; nCi < oMatA.cols; nCi++ )
    {
        oMatA.data[oMatA.step[0] * nRi + oMatA.step[1] * nCi + 0] = pRow[nCi * 4 + 0]; // B
        oMatA.data[oMatA.step[0] * nRi + oMatA.step[1] * nCi + 1] = pRow[nCi * 4 + 1]; // G
        oMatA.data[oMatA.step[0] * nRi + oMatA.step[1] * nCi + 2] = pRow[nCi * 4 + 2]; // R (alpha skipped)
    }
}
This code has to be changed to support other color formats (e.g. planar formats like YV12 use more than one plane of pstCurFfmsFrame->Data). Maybe someone could provide a full function that supports most of the color formats and an efficient way to copy data from pstCurFfmsFrame->Data to an OpenCV Mat.

Converting IPLimage <> texture2d in unity3d using openCVSharp

Like the subject says: I am trying to implement OpenCvSharp SURF in Unity3D and am kind of stuck on the conversion from IplImage to Texture2D. Also, this conversion process should run at at least 25 fps. So any tips or suggestions are very helpful!
Might be a bit late; I am working on the same thing now, and here is my solution:
void IplImageToTexture2D (IplImage displayImg)
{
    for (int i = 0; i < height; i++)
    {
        for (int j = 0; j < width; j++)
        {
            float b = (float)displayImg[i, j].Val0;
            float g = (float)displayImg[i, j].Val1;
            float r = (float)displayImg[i, j].Val2;
            Color color = new Color(r / 255.0f, g / 255.0f, b / 255.0f);
            videoTexture.SetPixel(j, height - i - 1, color);
        }
    }
    videoTexture.Apply();
}
But it is a bit slow.
Still trying to improve the performance.
Texture2D tex = new Texture2D(640, 480, TextureFormat.RGB24, false);
CvMat img = new CvMat(640, 480, MatrixType.U8C3);
byte[] data = new byte[640 * 480 * 3];
Marshal.Copy(img.Data, data, 0, 640 * 480 * 3);
tex.LoadRawTextureData(data); // LoadImage expects PNG/JPG bytes, not raw pixels
tex.Apply();
To improve performance, use Unity3D's undocumented function LoadRawTextureData:
Texture2D IplImageToTexture2D(IplImage img)
{
    Texture2D videoTexture = new Texture2D(imWidth, imHeight, TextureFormat.RGB24, false);
    byte[] data = new byte[imWidth * imHeight * 3];
    Marshal.Copy(img.ImageData, data, 0, imWidth * imHeight * 3);
    videoTexture.LoadRawTextureData(data);
    videoTexture.Apply();
    return videoTexture;
}
