Advanced image denoising using OpenCV

I am trying to denoise this image to get better edges.
I've tried bilateralFilter, GaussianBlur, morphological close and several thresholds, but every time I get an image like:
and when I run HoughLinesP on the dilated edges, the result is really bad.
Can someone help me improve this? Is there some way to remove this noise?
First try: using GaussianBlur. In this case I must use equalizeHist, or I can't get edges even with a really low threshold.
public class TesteNormal {

    static {
        System.loadLibrary("opencv_java310");
    }

    public static void main(String args[]) {
        Mat imgGrayscale = new Mat();
        Mat imgBlurred = new Mat();
        Mat imgCanny = new Mat();
        Mat image = Imgcodecs.imread("c:\\cordova\\imagens\\teste.jpg", 1);
        int imageWidth = image.width();
        int imageHeight = image.height();
        Imgproc.cvtColor(image, imgGrayscale, Imgproc.COLOR_BGR2GRAY);
        Imgproc.equalizeHist(imgGrayscale, imgGrayscale);
        Imgproc.GaussianBlur(imgGrayscale, imgBlurred, new Size(5, 5), 1.8);
        Photo.fastNlMeansDenoising(imgBlurred, imgBlurred);
        Imshow.show(imgBlurred);
        Mat imgKernel = Imgproc.getStructuringElement(Imgproc.MORPH_CROSS, new Size(3, 3));
        Imgproc.Canny(imgBlurred, imgCanny, 0, 80);
        Imshow.show(imgCanny);
        Imgproc.dilate(imgCanny, imgCanny, imgKernel, new Point(-1, -1), 2);
        Imgproc.erode(imgCanny, imgCanny, imgKernel, new Point(-1, -1), 1);
        Imshow.show(imgCanny);
        Mat lines = new Mat();
        int threshold = 100;
        int minLineSize = imageWidth < imageHeight ? imageWidth / 3 : imageHeight / 3;
        int lineGap = 5;
        Imgproc.HoughLinesP(imgCanny, lines, 1, Math.PI / 360, threshold, minLineSize, lineGap);
        System.out.println(lines.rows());
        for (int x = 0; x < lines.rows(); x++) {
            double[] vec = lines.get(x, 0);
            double x1 = vec[0], y1 = vec[1], x2 = vec[2], y2 = vec[3];
            Point start = new Point(x1, y1);
            Point end = new Point(x2, y2);
            Imgproc.line(image, start, end, new Scalar(255, 0, 0), 1);
        }
        Imshow.show(image);
    }
}
Second try: using bilateral filter:
public class TesteNormal {

    static {
        System.loadLibrary("opencv_java310");
    }

    public static void main(String args[]) {
        Mat imgBlurred = new Mat();
        Mat imgCanny = new Mat();
        Mat image = Imgcodecs.imread("c:\\cordova\\imagens\\teste.jpg", 1);
        int imageWidth = image.width();
        int imageHeight = image.height();
        Imgproc.bilateralFilter(image, imgBlurred, 10, 35, 35);
        Imshow.show(imgBlurred);
        Mat imgKernel = Imgproc.getStructuringElement(Imgproc.MORPH_CROSS, new Size(3, 3));
        Imgproc.Canny(imgBlurred, imgCanny, 0, 120);
        Imshow.show(imgCanny);
        Imgproc.dilate(imgCanny, imgCanny, imgKernel, new Point(-1, -1), 2);
        Imgproc.erode(imgCanny, imgCanny, imgKernel, new Point(-1, -1), 1);
        Imshow.show(imgCanny);
        Mat lines = new Mat();
        int threshold = 100;
        int minLineSize = imageWidth < imageHeight ? imageWidth / 3 : imageHeight / 3;
        int lineGap = 5;
        Imgproc.HoughLinesP(imgCanny, lines, 1, Math.PI / 360, threshold, minLineSize, lineGap);
        System.out.println(lines.rows());
        for (int x = 0; x < lines.rows(); x++) {
            double[] vec = lines.get(x, 0);
            double x1 = vec[0], y1 = vec[1], x2 = vec[2], y2 = vec[3];
            Point start = new Point(x1, y1);
            Point end = new Point(x2, y2);
            Imgproc.line(image, start, end, new Scalar(255, 0, 0), 1);
        }
        Imshow.show(image);
    }
}
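For reference, the morphological close mentioned at the top can be applied to the blurred image before Canny; a minimal sketch using the same Java API (the 3x3 rectangular kernel is an assumption to tune):
Mat closeKernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(3, 3)); // kernel size is an assumption
Imgproc.morphologyEx(imgBlurred, imgBlurred, Imgproc.MORPH_CLOSE, closeKernel); // close small gaps and specks before edge detection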
As suggested, I am trying to use opencv_contrib's StructuredEdgeDetection. I am testing with a fixed image.
First, I compiled OpenCV with the contrib modules.
Second, I wrote the C++ code:
JNIEXPORT jobject JNICALL Java_vi_pdfscanner_main_ScannerEngine_getRandomFlorest(JNIEnv *env, jobject thiz, jobject bitmap) {
    Mat mbgra = imread("/storage/emulated/0/Resp/coco.jpg", 1);
    Mat3f fsrc;
    mbgra.convertTo(fsrc, CV_32F, 1.0 / 255.0); // after this convertTo I get an all-black image, and so no edges
    const String model = "/storage/emulated/0/Resp/model.yml.gz";
    Ptr<cv::ximgproc::StructuredEdgeDetection> pDollar = cv::ximgproc::createStructuredEdgeDetection(model);
    Mat edges;
    __android_log_print(ANDROID_LOG_VERBOSE, APPNAME, "calling edges");
    pDollar->detectEdges(fsrc, edges);
    imwrite("/storage/emulated/0/Resp/edges.jpg", edges);
    jclass java_bitmap_class = (jclass) env->FindClass("android/graphics/Bitmap");
    jmethodID mid = env->GetMethodID(java_bitmap_class, "getConfig", "()Landroid/graphics/Bitmap$Config;");
    jobject bitmap_config = env->CallObjectMethod(bitmap, mid);
    jobject _bitmap = mat_to_bitmap(env, edges, false, bitmap_config);
    return _bitmap;
}
and I wrote this Java wrapper:
public class ScannerEngine {

    private static ScannerEngine ourInstance = new ScannerEngine();

    public static ScannerEngine getInstance() {
        return ourInstance;
    }

    private ScannerEngine() {
    }

    public native Bitmap getRandomFlorest(Bitmap bitmap);

    static {
        System.loadLibrary("opencv_java3");
        System.loadLibrary("Scanner");
    }
}
The point is, when I run these lines:
Mat mbgra = imread("/storage/emulated/0/Resp/coco.jpg", 1); // image is OK
Mat3f fsrc;
mbgra.convertTo(fsrc, CV_32F, 1.0 / 255.0); // now the image is all black; does anyone have an idea why?
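A likely cause of the all-black image: convertTo with a scale of 1.0 / 255.0 produces a CV_32F image with values in [0, 1], and imwrite (and most viewers) interpret pixel values on a 0-255 scale, so everything rounds down to black, even though the data itself is fine and is exactly the [0, 1] float input detectEdges expects. The detectEdges output is also CV_32F in [0, 1], so it needs to be scaled back to 8-bit before saving; in the Java API that fix would look roughly like this (a sketch, assuming edges holds the CV_32F detectEdges output):
Mat edges8u = new Mat();
edges.convertTo(edges8u, CvType.CV_8U, 255.0); // map [0, 1] floats back to the 0-255 range
Imgcodecs.imwrite("/storage/emulated/0/Resp/edges.jpg", edges8u);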
Thanks very much!

The results are as follows.
Original Image:
http://prntscr.com/cyd8qi
Edges Image:
http://prntscr.com/cyd9ax
It runs on Android 4.4 (API level 19) on a really old device.
That's all, thank you very much.

Related

Scrolling Image using PixelWriter / Reader

I'm trying to create a scrolling image that wraps around a canvas to follow its own tail. I've been trying to use PixelWriters and PixelReaders to save off the vertical pixel lines that are scrolling off the screen to the West, and append these to a new image which should grow on the RHS (East) of the screen.
It scrolls, but that's all that's happening. I don't understand how to calculate the scanlines, so apologies for this part.
Any help appreciated.
package controller;

import javafx.animation.AnimationTimer;
import javafx.scene.canvas.Canvas;
import javafx.scene.canvas.GraphicsContext;
import javafx.scene.image.*;
import javafx.scene.layout.*;
import util.GraphicsUtils;
import java.io.File;
import java.nio.ByteBuffer;
import java.nio.file.Path;
import java.nio.file.Paths;

class ImageContainer extends HBox {

    int w, h;
    int translatedAmount = 0;
    Image image;
    Canvas canvas;
    long startNanoTime = System.nanoTime();
    WritableImage eastImage = null;

    public ImageContainer() {
        setVisible(true);
        load();
        w = (int) image.getWidth();
        h = (int) image.getHeight();
        canvas = new Canvas(w, h);
        int edgeX = (int) canvas.getWidth(); // you can set this a little west for visibility's sake while debugging
        getChildren().addAll(canvas);
        GraphicsContext gc = canvas.getGraphicsContext2D();
        canvas.setVisible(true);
        gc.drawImage(image, 0, 0, w, h);
        setPrefSize(w, h);
        eastImage = new WritableImage(translatedAmount + 1, h); // create a new eastImage
        new AnimationTimer() {
            public void handle(long currentNanoTime) {
                if (((System.nanoTime() - startNanoTime) / 1000000000.0) < 0.05) {
                    return;
                } else {
                    startNanoTime = System.nanoTime();
                }
                translatedAmount++;
                Image westLine = getSubImageRectangle(image, 1, 0, 1, h); // get a 1-pixel strip from the west of the main image
                PixelReader westLinePixelReader = westLine.getPixelReader(); // create a pixel reader for this image
                byte[] westLinePixelBuffer = new byte[1 * h * 4]; // buffer for the pixels of the about-to-vanish westLine
                westLinePixelReader.getPixels(0, 0, 1, h, PixelFormat.getByteBgraInstance(), westLinePixelBuffer, 0, 4); // collect the pixels from the westLine strip (stride 4 is correct for a 1-pixel-wide strip)
                Image tempImg = eastImage; // save away the current east-side image
                byte[] tempBuffer = new byte[(int) tempImg.getWidth() * h * 4];
                PixelReader tempImagePixelReader = tempImg.getPixelReader(); // pixel reader for our temp copy of the east-side image
                tempImagePixelReader.getPixels(0, 0, (int) tempImg.getWidth(), h, PixelFormat.getByteBgraInstance(), tempBuffer, 0, 4); // note: the scanline stride should be bytes per row, i.e. (int) tempImg.getWidth() * 4, not 4
                eastImage = new WritableImage(translatedAmount + 1, h); // create a new eastImage, one pixel wider
                PixelWriter eastImagePixelWriter = eastImage.getPixelWriter(); // create a pixel writer for this new east-side image
                eastImagePixelWriter.setPixels(1, 0, (int) tempImg.getWidth(), h, PixelFormat.getByteBgraInstance(), tempBuffer, 0, 4); // copy the temp image in at x=1 (same stride caveat as above)
                eastImagePixelWriter.setPixels((int) tempImg.getWidth(), 0, 1, h, PixelFormat.getByteBgraInstance(), westLinePixelBuffer, 0, 4); // copy the westLine at x=tempImg.width
                image = getSubImageRectangle(image, 1, 0, (int) image.getWidth() - 1, h);
                gc.drawImage(image, 0, 0); // draw main image
                System.out.println(edgeX - eastImage.getWidth());
                gc.drawImage(eastImage, edgeX - eastImage.getWidth(), 0); // add lost image lines
            }
        }.start();
    }

    public void load() {
        Path imagePath = Paths.get("./src/main/resources/ribbonImages/clouds.png");
        File f = imagePath.toFile();
        assert f.exists();
        image = new Image(f.toURI().toString());
    }

    public Image getSubImageRectangle(Image image, int x, int y, int w, int h) {
        PixelReader pixelReader = image.getPixelReader();
        WritableImage newImage = new WritableImage(pixelReader, x, y, w, h);
        return newImage;
    }
}
Why make this more difficult than necessary? Simply draw the image to the Canvas twice:
public static void drawImage(Canvas canvas, Image sourceImage, double offset, double wrapWidth) {
    GraphicsContext gc = canvas.getGraphicsContext2D();
    gc.clearRect(0, 0, canvas.getWidth(), canvas.getHeight());
    // make |offset| < wrapWidth
    offset %= wrapWidth;
    if (offset < 0) {
        // make sure negative offsets do not result in the previous version
        // of the image not being drawn
        offset += wrapWidth;
    }
    gc.drawImage(sourceImage, -offset, 0);
    gc.drawImage(sourceImage, wrapWidth - offset, 0);
}
@Override
public void start(Stage primaryStage) {
    Image image = new Image("https://upload.wikimedia.org/wikipedia/commons/thumb/e/ec/Mona_Lisa%2C_by_Leonardo_da_Vinci%2C_from_C2RMF_retouched.jpg/402px-Mona_Lisa%2C_by_Leonardo_da_Vinci%2C_from_C2RMF_retouched.jpg");
    Canvas canvas = new Canvas(image.getWidth(), image.getHeight());
    primaryStage.setResizable(false);
    Scene scene = new Scene(new Group(canvas));
    DoubleProperty offset = new SimpleDoubleProperty();
    offset.addListener((observable, oldOffset, newOffset) -> drawImage(canvas, image, newOffset.doubleValue(), canvas.getWidth()));
    Timeline timeline = new Timeline(
            new KeyFrame(Duration.ZERO, new KeyValue(offset, 0, Interpolator.LINEAR)),
            new KeyFrame(Duration.seconds(10), new KeyValue(offset, image.getWidth() * 2, Interpolator.LINEAR))
    );
    timeline.setCycleCount(Animation.INDEFINITE);
    timeline.play();
    primaryStage.setScene(scene);
    primaryStage.sizeToScene();
    primaryStage.show();
}
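The point of this approach is that the two drawImage calls together always cover the whole canvas: the first copy is shifted left by offset, and the second starts exactly where the first one ends, so the seam is never visible; the modulo keeps the offset bounded so the animation can loop indefinitely.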

JavaCV: creating and drawing a grayscale one-channel histogram

I am new to this website; please let me know if I have made any mistakes in my post.
I have some questions regarding calculating and drawing a histogram in JavaCV. Below is the code I have written, based on some information that I have found:
There is this error that I get: OpenCV Error: One of arguments' values is out of range (index is out of range) in unknown function, file ......\src\opencv\modules\core\src\array.cpp, line 1691
private CvHistogram getHistogram(IplImage image) { // get histogram data; input has been converted to grayscale beforehand
    IplImage[] hsvImage1 = {image};
    // bins and value range
    int numberOfBins = 256;
    float minRange = 0.0f;
    float maxRange = 255.0f;
    // allocate histogram object
    int dims = 1;
    int[] sizes = new int[]{numberOfBins};
    int histType = CV_HIST_ARRAY;
    float[] minMax = new float[]{minRange, maxRange};
    float[][] ranges = new float[][]{minMax};
    CvHistogram hist = cvCreateHist(dims, sizes, histType, ranges, 1);
    cvCalcHist(hsvImage1, hist, 0, null);
    return hist;
}
private IplImage DrawHistogram(CvHistogram hist, IplImage image) { // draw histogram
    int scaleX = 1;
    int scaleY = 1;
    int i;
    float[] max_value = {0};
    int[] int_value = {0};
    cvGetMinMaxHistValue(hist, max_value, max_value, int_value, int_value); // get min and max values for the histogram
    IplImage imgHist = cvCreateImage(cvSize(256, image.height()), IPL_DEPTH_8U, 1); // create image to store the histogram
    cvZero(imgHist);
    CvPoint pts = new CvPoint(5);
    for (i = 0; i < 256; i++) { // draw the histogram
        float value = opencv_legacy.cvQueryHistValue_1D(hist, i);
        float nextValue = opencv_legacy.cvQueryHistValue_1D(hist, i + 1);
        pts.position(0).x(i * scaleX).y(image.height() * scaleY);
        pts.position(1).x(i * scaleX + scaleX).y(image.height() * scaleY);
        pts.position(2).x(i * scaleX + scaleX).y((int) ((image.height() - nextValue * image.height() / max_value[0]) * scaleY));
        pts.position(3).x(i * scaleX).y((int) ((image.height() - value * image.height() / max_value[0]) * scaleY));
        pts.position(4).x(i * scaleX).y(image.height() * scaleY);
        cvFillConvexPoly(imgHist, pts.position(0), 5, CvScalar.RED, CV_AA, 0);
    }
    return imgHist;
}
I have tried working from a few links (provided at the bottom); however, each of them is in a different language, so I am not sure I have converted them to Java correctly. To be honest, there are a few things I doubt and would be glad to get advice on, such as:
float[] max_value = {0}; // I found this on the internet and it gets past the syntax error in cvGetMinMaxHistValue(), but I am not sure whether it causes a logic error
pts.position(3).x(i * scaleX).y((int) ((image.height() - value * image.height() / max_value[0]) * scaleY)); // I cast to int to downcast to the type that pts recognises; also, max_value[0] is 0, and I wonder if this causes a logic error due to the division
Links used:
// use this
public CvHistogram getHistogram(IplImage image) { // get histogram data; input has been converted to grayscale beforehand
    IplImageArray hsvImage1 = splitChannels(image);
    // bins and value range
    int numberOfBins = 256;
    float minRange = 0.0f;
    float maxRange = 255.0f;
    // allocate histogram object
    int dims = 1;
    int[] sizes = new int[]{numberOfBins};
    int histType = CV_HIST_ARRAY;
    float[] minMax = new float[]{minRange, maxRange};
    float[][] ranges = new float[][]{minMax};
    CvHistogram hist = cvCreateHist(dims, sizes, histType, ranges, 1);
    cvCalcHist(hsvImage1, hist, 0, null);
    return hist;
}

private IplImageArray splitChannels(IplImage hsvImage) {
    CvSize size = hsvImage.cvSize();
    int depth = hsvImage.depth();
    IplImage channel0 = cvCreateImage(size, depth, 1);
    IplImage channel1 = cvCreateImage(size, depth, 1);
    IplImage channel2 = cvCreateImage(size, depth, 1);
    cvSplit(hsvImage, channel0, channel1, channel2, null);
    return new IplImageArray(channel0, channel1, channel2);
}
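On the cvGetMinMaxHistValue doubt: pass separate arrays for the min and max outputs rather than the same array twice, and guard against a zero maximum before dividing; roughly (a sketch using the same JavaCV API as the question):
float[] minValue = {0};
float[] maxValue = {0};
int[] minIdx = {0};
int[] maxIdx = {0};
cvGetMinMaxHistValue(hist, minValue, maxValue, minIdx, maxIdx);
float denom = maxValue[0] > 0 ? maxValue[0] : 1; // avoid dividing by zero for an empty histogram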
Your error is in this part:
for (i = 0; i < 256; i++) { // draw the histogram
    float value = opencv_legacy.cvQueryHistValue_1D(hist, i);
    float nextValue = opencv_legacy.cvQueryHistValue_1D(hist, i + 1);
You use i + 1, which goes out of range on the last iteration; run your for loop only up to 255 to correct it.
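In code, the corrected loop bound would look like this (reusing the drawing code from the question):
for (i = 0; i < 255; i++) { // stop at 254 so bin i + 1 stays inside the 256 bins
    float value = opencv_legacy.cvQueryHistValue_1D(hist, i);
    float nextValue = opencv_legacy.cvQueryHistValue_1D(hist, i + 1);
    // ... draw the polygon for bin i exactly as before ...
}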
I hope I helped you. GL

OpenCV Fingertip detection

Good day. I'm new to OpenCV, and right now I'm trying to do fingertip detection using colour tracking and background subtraction methods. I got the colour tracking part working, but I have no idea how to subtract the background and leave only the fingertips.
Here is my code.
#include <opencv2/opencv.hpp>
#include <stdio.h>
#include <iostream>

using namespace std;

IplImage* GetThresholdedImage(IplImage* img, CvScalar& lowerBound, CvScalar& upperBound)
{
    // Convert the image into an HSV image
    IplImage* imgHSV = cvCreateImage(cvGetSize(img), 8, 3);
    cvCvtColor(img, imgHSV, CV_BGR2HSV);
    IplImage* imgThreshed = cvCreateImage(cvGetSize(img), 8, 1);
    cvInRangeS(imgHSV, lowerBound, upperBound, imgThreshed);
    cvReleaseImage(&imgHSV);
    return imgThreshed;
}

int main()
{
    int lineThickness = 2;
    CvScalar lowerBound = cvScalar(20, 100, 100);
    CvScalar upperBound = cvScalar(30, 255, 255);
    int b, g, r;
    lowerBound = cvScalar(0, 58, 89);
    upperBound = cvScalar(25, 173, 229);
    CvCapture* capture = 0;
    capture = cvCaptureFromCAM(1);
    if (!capture)
    {
        printf("Could not initialize capturing...\n");
        return -1;
    }
    cvNamedWindow("video");
    cvNamedWindow("thresh");
    // This image holds the "scribble" data...
    // the tracked positions of the object
    IplImage* imgScribble = NULL;
    while (true)
    {
        IplImage* frame = 0;
        frame = cvQueryFrame(capture);
        if (!frame)
            break;
        // If this is the first frame, we need to initialize it
        if (imgScribble == NULL)
        {
            imgScribble = cvCreateImage(cvGetSize(frame), 8, 3);
        }
        // Holds the thresholded image (tracked color -> white, the rest -> black)
        IplImage* imgThresh = GetThresholdedImage(frame, lowerBound, upperBound);
        // Calculate the moments to estimate the position of the object
        CvMoments* moments = (CvMoments*) malloc(sizeof(CvMoments));
        cvMoments(imgThresh, moments, 1);
        // The actual moment values
        double moment10 = cvGetSpatialMoment(moments, 1, 0);
        double moment01 = cvGetSpatialMoment(moments, 0, 1);
        double area = cvGetCentralMoment(moments, 0, 0);
        // Holding the last and current positions
        static int posX = 0;
        static int posY = 0;
        int lastX = posX;
        int lastY = posY;
        posX = moment10 / area;
        posY = moment01 / area;
        cout << "position = " << posX << " " << posY << endl;
        // We want to draw a line only if it's a valid position
        if (lastX > 0 && lastY > 0 && posX > 0 && posY > 0)
        {
            // Draw a yellow line from the previous point to the current point
            cvLine(imgScribble, cvPoint(posX, posY), cvPoint(lastX, lastY), upperBound, lineThickness);
        }
        // Add the scribbling image and the frame...
        cvAdd(frame, imgScribble, frame);
        cvShowImage("thresh", imgThresh);
        cvShowImage("video", frame);
        int c = cvWaitKey(10);
        if (c == 27) // ESC key
        {
            break;
        }
        cvReleaseImage(&imgThresh);
        free(moments); // allocated with malloc, so release with free (not delete)
    }
    cvReleaseCapture(&capture);
    return 0;
}
I don't know if I understand you right, but I think you need to add the following:
cvErode(imgThreshed, imgThreshed, NULL, 1);
cvDilate(imgThreshed, imgThreshed, NULL, 1);
in GetThresholdedImage, just before returning imgThreshed, to get less noise: the erode/dilate pair acts as a morphological opening that removes small speckles from the thresholded mask. But after all, I think it would be better for you to use OpenCV's cv::Mat object ;)
Try BGS library, I used it before and like it. You can get it here: http://code.google.com/p/bgslibrary/

Calculate white area pixels in contours using OpenCV and JavaCV

I have developed a program to detect motion using JavaCV. Up to now I have completed cvFindContours on the processed image. The source code is given below:
public class MotionDetect {

    public static void main(String args[]) throws Exception, InterruptedException {
        //FFmpegFrameGrabber grabber = new FFmpegFrameGrabber(new File("D:/pool.avi"));
        OpenCVFrameGrabber grabber = new OpenCVFrameGrabber("D:/2.avi");
        final CanvasFrame canvas = new CanvasFrame("My Image");
        final CanvasFrame canvas2 = new CanvasFrame("ROI");
        canvas.setDefaultCloseOperation(javax.swing.JFrame.EXIT_ON_CLOSE);
        grabber.start();
        IplImage frame = grabber.grab();
        CvSize imgsize = cvGetSize(frame);
        IplImage grayImage = cvCreateImage(imgsize, IPL_DEPTH_8U, 1);
        IplImage ROIFrame = cvCreateImage(cvSize((265 - 72), (214 - 148)), IPL_DEPTH_8U, 1);
        IplImage colorImage;
        IplImage movingAvg = cvCreateImage(imgsize, IPL_DEPTH_32F, 3);
        IplImage difference = null;
        IplImage temp = null;
        IplImage motionHistory = cvCreateImage(imgsize, IPL_DEPTH_8U, 3);
        CvRect bndRect = cvRect(0, 0, 0, 0);
        CvPoint pt1 = new CvPoint(), pt2 = new CvPoint();
        CvFont font = null;
        // Capture the movie frame by frame.
        int prevX = 0;
        int numPeople = 0;
        char[] wow = new char[65];
        int avgX = 0;
        // Indicates whether this is the first time in the loop of frames.
        boolean first = true;
        // Indicates the contour which was closest to the left boundary before the object
        // entered the region between the buildings.
        int closestToLeft = 0;
        // Same as above, but for the right.
        int closestToRight = 320;
        while (true) {
            colorImage = grabber.grab();
            if (colorImage != null) {
                if (first) {
                    difference = cvCloneImage(colorImage);
                    temp = cvCloneImage(colorImage);
                    cvConvertScale(colorImage, movingAvg, 1.0, 0.0);
                    first = false;
                    //cvShowImage("My Window1", difference);
                } // else, make a running average of the motion
                else {
                    cvRunningAvg(colorImage, movingAvg, 0.020, null);
                }
                // Convert the scale of the moving average.
                cvConvertScale(movingAvg, temp, 1.0, 0.0);
                // Subtract the current frame from the moving average.
                cvAbsDiff(colorImage, temp, difference);
                // Convert the image to grayscale.
                cvCvtColor(difference, grayImage, CV_RGB2GRAY);
                //canvas.showImage(grayImage);
                // Convert the image to black and white.
                cvThreshold(grayImage, grayImage, 70, 255, CV_THRESH_BINARY);
                // Dilate and erode to get people blobs.
                cvDilate(grayImage, grayImage, null, 18);
                cvErode(grayImage, grayImage, null, 10);
                canvas.showImage(grayImage);
                ROIFrame = cvCloneImage(grayImage);
                cvSetImageROI(ROIFrame, cvRect(72, 148, (265 - 72), (214 - 148)));
                //cvOr(outFrame, tempFrame, outFrame);
                cvShowImage("ROI Frame", ROIFrame);
                cvRectangle(colorImage, /* the dest image */
                        cvPoint(72, 148), /* top left point */
                        cvPoint(265, 214), /* bottom right point */
                        cvScalar(255, 0, 0, 0), /* the color; blue */
                        1, 8, 0);
                CvMemStorage storage = cvCreateMemStorage(0);
                CvSeq contour = new CvSeq(null);
                cvFindContours(grayImage, storage, contour, Loader.sizeof(CvContour.class), CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
            }
            // Show the frame.
            cvShowImage("My Window", colorImage);
            // Wait for the user to see it.
            cvWaitKey(10);
        }
        //Thread.sleep(50);
    }
}
In this code, for ROIFrame, I need to calculate the white contour areas or pixel counts. Is there any way I can proceed with this?
Use the function cvContourArea(). Documentation here.
In your code, after your cvFindContours, loop over all of your contours like this:
CvSeq* curr_contour = contour;
while (curr_contour != NULL) {
    area = fabs(cvContourArea(curr_contour, CV_WHOLE_SEQ, 0));
    curr_contour = curr_contour->h_next;
}
Don't forget to store the area somewhere.
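Since the question uses JavaCV, a rough translation of that loop might look like this (a sketch; it assumes the JavaCV bindings where cvContourArea takes a CvSlice and CvSeq exposes h_next()):
CvSeq currContour = contour;
while (currContour != null && !currContour.isNull()) {
    double area = Math.abs(cvContourArea(currContour, CV_WHOLE_SEQ, 0));
    System.out.println("contour area: " + area); // store the area somewhere useful instead
    currContour = currContour.h_next();
}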

Converting IplImage <> Texture2D in Unity3D using OpenCvSharp

Like the subject says, I am trying to implement OpenCvSharp SURF in Unity3D and I'm kind of stuck on the conversion from IplImage to Texture2D. Also consider that this conversion process should run at least at 25 fps. Any tips or suggestions are very helpful!
Might be a bit late; I am working on the same thing now and here is my solution:
void IplImageToTexture2D(IplImage displayImg)
{
    for (int i = 0; i < height; i++)
    {
        for (int j = 0; j < width; j++)
        {
            // OpenCV stores pixels as BGR, so Val0..Val2 map to b, g, r
            float b = (float) displayImg[i, j].Val0;
            float g = (float) displayImg[i, j].Val1;
            float r = (float) displayImg[i, j].Val2;
            Color color = new Color(r / 255.0f, g / 255.0f, b / 255.0f);
            videoTexture.SetPixel(j, height - i - 1, color);
        }
    }
    videoTexture.Apply();
}
But it is a bit slow.
Still trying to improve the performance.
Texture2D tex = new Texture2D(640, 480);
CvMat img = new CvMat(640, 480, MatrixType.U8C3);
byte[] data = new byte[640 * 480 * 3];
Marshal.Copy(img.Data, data, 0, 640 * 480 * 3);
tex.LoadImage(data);
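Note that Texture2D.LoadImage expects PNG- or JPEG-encoded bytes rather than raw pixel data, so the snippet above only works if the byte array actually holds an encoded image.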
To improve performance, use Unity3D's undocumented function LoadRawTextureData:
Texture2D IplImageToTexture2D(IplImage img)
{
    Texture2D videoTexture = new Texture2D(imWidth, imHeight, TextureFormat.RGB24, false);
    byte[] data = new byte[imWidth * imHeight * 3];
    Marshal.Copy(img.ImageData, data, 0, imWidth * imHeight * 3);
    videoTexture.LoadRawTextureData(data);
    videoTexture.Apply();
    return videoTexture;
}
