Colour & size issue after STL import in Processing

I wrote a program in Processing to import an STL file and then rotate it in 3D as required.
But I am facing a problem: the imported model is very colourful and small in size; it looks very different from the original.
Can you please help me resolve this problem?
The code and images are given below.
import toxi.geom.*;
import toxi.geom.mesh.*;
import toxi.processing.*;

TriangleMesh mesh;
ToxiclibsSupport gfx;

void setup() {
  size(displayWidth, displayHeight, P3D);
  mesh = (TriangleMesh) new STLReader().loadBinary(sketchPath("check.stl"), STLReader.TRIANGLEMESH);
  gfx = new ToxiclibsSupport(this);
}

void draw() {
  background(51);
  translate(width / 2, height / 2, 0);
  rotateX(mouseY * 0.01);
  rotateY(mouseX * 0.01);
  gfx.mesh(mesh, false);
}

I had the same confusion as you, but after scrolling through the reference pages on Processing.org
I found a couple of functions that affect the STL object in a positive way.
Some of them are:
directionalLight()
noStroke()
scale() - increases or decreases the size of a shape by expanding and contracting vertices
Your issue is directly linked to these functions, so just edit them according to your requirements:
import toxi.geom.*;
import toxi.geom.mesh.*;
import toxi.processing.*;

TriangleMesh mesh;
ToxiclibsSupport gfx;

void setup() {
  size(displayWidth, displayHeight, P3D); // width first, then height
  mesh = (TriangleMesh) new STLReader().loadBinary(sketchPath("check.stl"), STLReader.TRIANGLEMESH);
  gfx = new ToxiclibsSupport(this);
}

void draw() {
  background(51);
  translate(width / 2, height / 2, 0);
  rotateX(mouseY * 0.01);
  rotateY(mouseX * 0.01);
  // two directional lights give the mesh proper shading
  directionalLight(192, 168, 128, 0, -1000, -0.5);
  directionalLight(255, 64, 0, 0.5f, -0.5f, -0.1f);
  noStroke(); // hide the triangle edges
  scale(3);   // enlarge the model
  gfx.mesh(mesh, false);
}
[image: result after running the program]
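As an aside (my own sketch, not part of the original answer): instead of scaling the transform matrix every frame, toxiclibs can also transform the mesh itself once at load time. I am assuming TriangleMesh's center() and scale() methods here, neither of which appears in the original post:

void setup() {
  size(displayWidth, displayHeight, P3D);
  mesh = (TriangleMesh) new STLReader().loadBinary(sketchPath("check.stl"), STLReader.TRIANGLEMESH);
  mesh.center(new Vec3D(0, 0, 0)); // assumed API: recentre the mesh on the origin
  mesh.scale(3);                   // assumed API: bake the enlargement into the vertices
  gfx = new ToxiclibsSupport(this);
}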

Related

how to apply opencv filters to ARGB images in Processing?

In part of a project, after displaying backgroundimage, I need to hide part of it by creating a mask (named mask1 in my code) and making the first 30,000 pixels of mask1 transparent. Then I blur the result using the blur() method of the OpenCV library. The problem is that the OpenCV library seems to ignore the opacity (alpha channel) of the pixels in mask1. As a result, it is impossible to see backgroundimage behind the blurred image created by the OpenCV library. Here is my code:
import gab.opencv.*;

OpenCV opencv;
int[] userMap;
PImage backgroundimage, mask1;

void setup() {
  backgroundimage = loadImage("test.jpg");
  mask1 = createImage(640, 480, ARGB);
  opencv = new OpenCV(this, 640, 480);
  size(640, 480);
}

void draw() {
  image(backgroundimage, 0, 0);
  mask1.loadPixels();
  for (int index = 0; index < 640 * 480; index++) {
    if (index < 30000) {
      mask1.pixels[index] = color(0, 0, 0, 0); // fully transparent
    } else {
      mask1.pixels[index] = color(255, 255, 255);
    }
  }
  mask1.updatePixels();
  opencv.loadImage(mask1);
  opencv.blur(8);
  image(opencv.getSnapshot(), 0, 0);
}
Is there any other way to blur mask1?
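One possible workaround (a sketch of mine, not from the original post): since the OpenCV pass appears to drop the alpha channel, you could let OpenCV do the blur and re-apply the transparency afterwards with PImage's mask() method, where black mask pixels become transparent and white ones opaque:

PImage blurred = opencv.getSnapshot(); // RGB result; alpha is gone
PImage alphaMask = createImage(640, 480, RGB);
alphaMask.loadPixels();
for (int index = 0; index < 640 * 480; index++) {
  alphaMask.pixels[index] = (index < 30000) ? color(0) : color(255);
}
alphaMask.updatePixels();
blurred.mask(alphaMask); // restore the transparent region
image(backgroundimage, 0, 0);
image(blurred, 0, 0);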

Need help - MonoGame 2D water reflection

OK, so I am following this tutorial, Water Reflection XNA, and when I adapt the code for MonoGame I can't get the final result.
So here is my LoadContent code:
protected override void LoadContent()
{
    spriteBatch = new SpriteBatch(GraphicsDevice);
    texture = Content.Load<Texture2D>("test");
    effect = Content.Load<Effect>("LinearFade");
    effect.Parameters["Visibility"].SetValue(0.7f);
}
and my Draw code:
protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.CornflowerBlue);

    spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend);
    //effect.CurrentTechnique.Passes[0].Apply();
    spriteBatch.Draw(texture, new Vector2(texturePos.X, texturePos.Y + texture.Height), null, Color.White * 0.5f, 0f, Vector2.Zero, 1, SpriteEffects.FlipVertically, 0f);
    spriteBatch.End();

    spriteBatch.Begin();
    spriteBatch.Draw(texture, texturePos, Color.White);
    spriteBatch.End();

    base.Draw(gameTime);
}
and finally my .fx file: LinearFade
The problem starts when I apply the effect: my texture just disappears. If I comment out the effect part in the Draw method, I get the mirrored image with a fade (from messing with the alpha, "Color.White * 0.5f"), but without the gradual fade effect he has in the tutorial, running from the middle of the picture to the bottom. I still don't have much experience with MonoGame and shaders, but I am learning.
If anyone knows how to fix this, or how to make it look like the tutorial above, that would be nice. By the way, sorry for the bad English; it's not my main language.
OK, no need for help; I got it after two days of thinking. The thing is that you need a default vertex shader input and output, and then the shader works. So if anyone has problems with shaders in MonoGame, first check whether you have the default vertex input and output. I will put the solution code here so that anyone doing the same tutorial or something similar knows what the problem is.
Solution: Working Effect.fx

How to store image from cam?

I have just started with processing.js.
The goal of my program is to apply an OpenCV image filter to each video frame.
So I thought (however, I found out it does not work this way :<):
1. Get the video stream from a Capture object (from the processing.video package).
2. Store the current image (I hope it can be stored as a PImage object).
3. Apply an OpenCV image filter.
4. Call the image() method with the filtered PImage object.
I found out how to get the video stream from the cam, but I do not know how to store it.
import processing.video.*;
import gab.opencv.*;

Capture cap;
OpenCV opencv;

public void setup() {
  //size(800, 600);
  size(640, 480);
  colorMode(RGB, 255, 255, 255, 100);
  cap = new Capture(this, width, height);
  opencv = new OpenCV(this, cap);
  cap.start();
  background(0);
}

public void draw() {
  if (cap.available()) {
    cap.read(); // returns void
  }
  image(cap, 0, 0);
}
This code gets the video stream and shows what it receives. However, I cannot store a single frame, since Capture.read() returns void.
After storing the current frame, I would like to transform the PImage with OpenCV, like this:
PImage gray = opencv.getSnapshot();
opencv.threshold(80);
thresh = opencv.getSnapshot();
opencv.loadImage(gray);
opencv.blur(12);
blur = opencv.getSnapshot();
opencv.loadImage(gray);
opencv.adaptiveThreshold(591, 1);
adaptive = opencv.getSnapshot();
Is there any decent way to store and transform the current frame? (I think my approach - showing the frame after saving and transforming the current image - uses a lot of resources depending on the frame rate.)
Thanks for any answers :D
Not sure what you want to do, and I'm sure you have solved it already, but this could be useful for someone anyway...
It seems you can just use the Capture object's name directly wherever a PImage is expected, since Capture extends PImage:
cap = new Capture(this, width, height);
// ...code starting and reading the capture goes here...
PImage snapshot = cap;
// then you can do whatever you wanted to do with the PImage
snapshot.save("snapshot.jpg");
// actually this seems to work fine too
cap.save("snapshot.jpg");
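One caveat (my addition, not part of the original answer): the assignment above only copies a reference, so "snapshot" keeps updating with the live capture. PImage's get() method returns an independent copy of the current frame:

PImage snapshot = cap.get(); // get() copies the current frame's pixels
snapshot.save("snapshot.jpg");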
Use opencv.loadImage(cap). For example:
if (cap.available() == true) {
  cap.read();
}
opencv.loadImage(cap);
opencv.blur(15);
image(opencv.getSnapshot(), 0, 0);
Hope this helps!

Constructor OpenCV is undefined

Trying to learn face recognition in Processing 2 using OpenCV. This code is copied right out of Jan Vantomme's book. However, I am using a newer version of OpenCV than the one listed in the book. The error received is: "The constructor OpenCV(sketch_ORIGopenCV) is undefined." Does anyone know if OpenCV is not upward compatible, or what else causes this error?
import gab.opencv.*; // original code in Vantomme's book has "import hypermedia.video.*;"
                     // I assume that's updated to gab.opencv.*, but it should still work.

OpenCV opencv;

void setup()
{
  size(640, 480);
  opencv = new OpenCV(this); // <----- THIS IS THE ERROR LINE *********
  opencv.capture(width, height);
}

void draw()
{
  opencv.read();
  opencv.flip(OpenCV.flip_HORIZONTAL);
  image(opencv.image(), 0, 0);
}
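For what it's worth (a sketch of mine, not from the original post): hypermedia.video and gab.opencv have quite different APIs, which is why that constructor is undefined. gab.opencv expects the frame size in its constructor (new OpenCV(this, width, height), as used elsewhere on this page) and has no capture()/read() of its own; camera frames come from processing.video's Capture instead. A minimal sketch under those assumptions:

import gab.opencv.*;
import processing.video.*;

OpenCV opencv;
Capture cap;

void setup() {
  size(640, 480);
  cap = new Capture(this, width, height);   // camera handled by processing.video
  opencv = new OpenCV(this, width, height); // constructor wants the frame size
  cap.start();
}

void draw() {
  if (cap.available()) {
    cap.read();
  }
  opencv.loadImage(cap); // hand the current frame to OpenCV
  image(opencv.getSnapshot(), 0, 0);
}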

Removing long horizontal/vertical lines from edge image using OpenCV

How can I use standard image processing filters (from OpenCV) to remove long horizontal and vertical lines from an image?
The images are B&W so removing means simply painting black.
Illustration:
I'm currently doing it in Python, iterating over pixel rows and columns and detecting runs of consecutive pixels, removing those that are longer than N pixels. However, it is really slow compared to the OpenCV library, and if there's a way of accomplishing the same thing with OpenCV functions, it will likely be orders of magnitude faster.
I imagine this can be done by convolution using a kernel that's a row of pixels (for horizontal lines), but I'm having a hard time figuring out the exact operation that would do the trick.
If your lines are truly horizontal/vertical, try this:
import cv2
import numpy as np

img = cv2.imread('c:/data/test.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# horizontal structuring element: a single row of ones in an 11x11 kernel
linek = np.zeros((11, 11), dtype=np.uint8)
linek[5, ...] = 1

# morphological opening keeps only the long horizontal structures...
x = cv2.morphologyEx(gray, cv2.MORPH_OPEN, linek, iterations=1)
# ...which are then subtracted from the image
gray -= x

cv2.imshow('gray', gray)
cv2.waitKey(0)
Result: [image]
You can refer to the OpenCV Morphological Transformations documentation for more details.
How long is "long"? Long as in lines that run the length of the entire image, or just longer than n pixels?
If the latter, then you could just use an (n+1) x (n+1) median or mode filter, set the corner coefficients to zero, and you'd get the desired effect.
If you're referring to lines that run the width of the entire image, just use the memcmp() function against a row of data, comparing it to a pre-allocated array of zeros of the same length as a row. If they are the same, you know you have a blank line that spans the horizontal length of the image, and that row can be deleted (a sketch of this idea follows below).
This will be MUCH faster than the element-wise comparisons you are currently using, and it is very well explained here:
Why is memcpy() and memmove() faster than pointer increments?
If you want to repeat the same operation for vertical lines, just transpose the image and repeat the operation.
I know this is more of a system-optimization-level approach than the OpenCV filter you requested, but it gets the job done fast and safely. You can speed up the calculation even more if you manage to force the image and your empty array to be 32-bit aligned in memory.
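Not from the original answer, but a Java analogue of the memcmp() idea under the same assumptions (8-bit grayscale pixels in row-major order); Arrays.equals with index ranges (Java 9+) performs the whole-row comparison in one call:

import java.util.Arrays;

public class BlankRowFinder {
  // marks rows that are entirely zero in a row-major 8-bit grayscale image
  public static boolean[] findBlankRows(byte[] pixels, int width, int height) {
    byte[] zeroRow = new byte[width]; // pre-allocated comparison target
    boolean[] blank = new boolean[height];
    for (int y = 0; y < height; y++) {
      blank[y] = Arrays.equals(pixels, y * width, (y + 1) * width,
                               zeroRow, 0, width);
    }
    return blank;
  }
}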
To remove horizontal lines from an image, you can use an edge-detection algorithm to detect edges and then use the Hough transform in OpenCV to detect lines and colour them white:
import cv2
import numpy as np

img = cv2.imread('input.jpg', 0) # load as grayscale (the original passed an undefined variable here)
laplacian = cv2.Laplacian(img, cv2.CV_8UC1) # Laplacian edge detection

minLineLength = 900
maxLineGap = 100
# pass the length/gap limits as keyword arguments; positionally they would
# land in the unused `lines` output slot
lines = cv2.HoughLinesP(laplacian, 1, np.pi / 180, 100,
                        minLineLength=minLineLength, maxLineGap=maxLineGap)

for line in lines:
    for x1, y1, x2, y2 in line:
        cv2.line(img, (x1, y1), (x2, y2), (255, 255, 255), 1)

cv2.imwrite('Written_Back_Results.jpg', img)
This one is in Java, using the OpenCV Java bindings:
package com.test11;

import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;
import org.opencv.imgcodecs.Imgcodecs;

public class GetVerticalOrHorizonalLines {
  static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

  public static void main(String[] args) {
    //Canny process before HoughLine Recognition
    Mat source = Imgcodecs.imread("src//data//bill.jpg");
    Mat gray = new Mat(source.rows(), source.cols(), CvType.CV_8UC1);
    Imgproc.cvtColor(source, gray, Imgproc.COLOR_BGR2GRAY);

    Mat binary = new Mat();
    Imgproc.adaptiveThreshold(gray, binary, 255, Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY, 15, -2);
    Imgcodecs.imwrite("src//data//binary.jpg", binary);

    Mat horizontal = binary.clone();
    int horizontalsize = horizontal.cols() / 30;
    int verticalsize = horizontal.rows() / 30;

    // structuring element that matches long horizontal runs
    Mat horizontal_element = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(horizontalsize, 1));
    //Mat element = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(3, 3));
    Imgcodecs.imwrite("src//data//horizontal_element.jpg", horizontal_element);

    Mat Linek = Mat.zeros(source.size(), CvType.CV_8UC1);
    //x = Imgproc.morphologyEx(gray, dst, op, kernel, anchor, iterations);
    Imgproc.morphologyEx(gray, Linek, Imgproc.MORPH_BLACKHAT, horizontal_element);
    Imgcodecs.imwrite("src//data//bill_RECT_Blackhat.jpg", Linek);

    // structuring element that matches long vertical runs
    Mat vertical_element = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(1, verticalsize));
    Imgcodecs.imwrite("src//data//vertical_element.jpg", vertical_element);

    Mat Linek2 = Mat.zeros(source.size(), CvType.CV_8UC1);
    //x = Imgproc.morphologyEx(gray, dst, op, kernel, anchor, iterations);
    Imgproc.morphologyEx(gray, Linek2, Imgproc.MORPH_CLOSE, vertical_element);
    Imgcodecs.imwrite("src//data//bill_RECT_Blackhat2.jpg", Linek2);
  }
}
Another variant:
package com.test12;

import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;
import org.opencv.imgcodecs.Imgcodecs;

public class ImageSubstrate {
  static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

  public static void main(String[] args) {
    Mat source = Imgcodecs.imread("src//data//bill.jpg");
    Mat image_h = Mat.zeros(source.size(), CvType.CV_8UC1);
    Mat image_v = Mat.zeros(source.size(), CvType.CV_8UC1);

    // invert so the lines become bright structures
    Mat output = new Mat();
    Core.bitwise_not(source, output);

    Mat output_result = new Mat();

    // open with a 20x1 kernel to isolate horizontal lines, then subtract them
    Mat kernel_h = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(20, 1));
    Imgproc.morphologyEx(output, image_h, Imgproc.MORPH_OPEN, kernel_h);
    Imgcodecs.imwrite("src//data//output.jpg", output);
    Core.subtract(output, image_h, output_result);
    Imgcodecs.imwrite("src//data//output_result.jpg", output_result);

    // open with a 1x20 kernel to isolate vertical lines, then subtract them
    Mat kernel_v = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(1, 20));
    Imgproc.morphologyEx(output_result, image_v, Imgproc.MORPH_OPEN, kernel_v);
    Mat output_result2 = new Mat();
    Core.subtract(output_result, image_v, output_result2);
    Imgcodecs.imwrite("src//data//output_result2.jpg", output_result2);
  }
}
