Save frames of background subtraction capture

I am working on a background subtraction capture demo and have run into difficulties. I already have the pixels of the extracted silhouette and I intend to draw them into a buffer created with createGraphics(). I set the new background to be 100% transparent so that I only keep the foreground extraction. Then I use the saveFrame() function to save a PNG file of each frame. However, it doesn't work as I expected: I want a series of PNGs of the silhouette extraction with a 100% transparent background, but all I get are ordinary PNGs of the frames from the camera feed. Could anyone help me see what the problem with this code is? Thanks a lot in advance; any help will be appreciated.
import processing.video.*;

Capture video;
PGraphics pg;
PImage backgroundImage;
float threshold = 30;

void setup() {
  size(320, 240);
  video = new Capture(this, width, height);
  video.start();
  backgroundImage = createImage(video.width, video.height, RGB);
  pg = createGraphics(320, 240);
}

void captureEvent(Capture video) {
  video.read();
}

void draw() {
  pg.beginDraw();
  loadPixels();
  video.loadPixels();
  backgroundImage.loadPixels();
  image(video, 0, 0);
  for (int x = 0; x < video.width; x++) {
    for (int y = 0; y < video.height; y++) {
      int loc = x + y * video.width;
      color fgColor = video.pixels[loc];
      color bgColor = backgroundImage.pixels[loc];
      float r1 = red(fgColor); float g1 = green(fgColor); float b1 = blue(fgColor);
      float r2 = red(bgColor); float g2 = green(bgColor); float b2 = blue(bgColor);
      float diff = dist(r1, g1, b1, r2, g2, b2);
      if (diff > threshold) {
        pixels[loc] = fgColor;
      } else {
        pixels[loc] = color(0, 0);
      }
    }
  }
  pg.updatePixels();
  pg.endDraw();
  saveFrame("line-######.png");
}

void mousePressed() {
  backgroundImage.copy(video, 0, 0, video.width, video.height, 0, 0, video.width, video.height);
  backgroundImage.updatePixels();
}

Re:
Then I use the saveFrame() function to save a PNG file of each frame. However, it doesn't work as I expected: I want a series of PNGs of the silhouette extraction with a 100% transparent background, but all I get are ordinary PNGs of the frames from the camera feed.
This won't work, because saveFrame() saves the canvas, and the canvas doesn't support transparency. For example, from the reference:
It is not possible to use the transparency alpha parameter with background colors on the main drawing surface. It can only be used along with a PGraphics object and createGraphics(). https://processing.org/reference/background_.html
If you want to dump a frame with transparency you need to use .save() to dump it directly from a PImage / PGraphics.
https://processing.org/reference/PImage_save_.html
If you need to clear your PImage / PGraphics and reuse it each frame, either use pg.clear() or pg.background(0,0,0,0) (set all pixels to transparent black).
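For reference, here is a minimal sketch of that approach (untested, and assuming the same video, backgroundImage, pg and threshold variables declared in the question): the masked pixels are written into pg.pixels rather than the sketch's own pixels array, and pg.save() is used instead of saveFrame(), so the alpha channel survives in the PNG.
void draw() {
  image(video, 0, 0); // keep the raw camera preview on the canvas

  video.loadPixels();
  backgroundImage.loadPixels();

  pg.beginDraw();
  pg.clear();          // reset the buffer to transparent black each frame
  pg.loadPixels();
  for (int x = 0; x < video.width; x++) {
    for (int y = 0; y < video.height; y++) {
      int loc = x + y * video.width;
      color fgColor = video.pixels[loc];
      color bgColor = backgroundImage.pixels[loc];
      float diff = dist(red(fgColor), green(fgColor), blue(fgColor),
                        red(bgColor), green(bgColor), blue(bgColor));
      // write into the PGraphics' pixels, not the sketch's
      pg.pixels[loc] = (diff > threshold) ? fgColor : color(0, 0);
    }
  }
  pg.updatePixels();
  pg.endDraw();

  // save the buffer itself so the transparent background is preserved
  pg.save("line-" + nf(frameCount, 6) + ".png");
}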


How can I randomize Image size

Thank you to the people who previously helped me; I have managed to make a lot of progress on my generative business cards assignment.
I want to randomly resize 9 images in Processing but can't seem to find a good example on the internet of how to do it. The size of the images is 850x550, which is also the background size.
Does anyone know a good and easy-to-follow tutorial, or could you give me an example?
Processing's documentation for the image() function covers this.
I still wrote you some skeleton code to demonstrate:
PImage img;
int w, h;
float scaleModifier = 1;

void setup() {
  size(800, 600);
  img = loadImage("bean.jpeg");
  w = img.width;
  h = img.height;
}

void draw() {
  background(0);
  image(img, 0, 0, w, h); // here is the important line
}

// Every click will resize the image
void mouseClicked() {
  scaleModifier += 0.1;
  if (scaleModifier > 1) {
    scaleModifier = 0.1;
  }
  w = (int)(img.width * scaleModifier);
  h = (int)(img.height * scaleModifier);
}
What's important to know is the following:
image() has 2 signatures:
image(img, a, b)
image(img, a, b, c, d)
Where the following applies:
img => the PImage for your image
a => x coordinate where to draw the image
b => y coordinate where to draw the image
c => the image's width (if it's different from the image's width, this implies a resize)
d => the image's height (also implies a resize if it's different from the "real" height)
Have fun!
Say you have stored an image in a PImage object named image.
You can generate two random integers for the width and height of the image and then resize it using the resize() method:
int img_width = floor(random(min_value, max_value));
int img_height = floor(random(min_value, max_value));
image.resize(img_width, img_height); //this simple code resizes the image to any dimension
Or, if you want to keep the same aspect ratio, you can use this approach:
//first set either of width or height to a random value
int img_width = floor(random(min_value, max_value));
//then proportionally calculate the other dimension of the image
float ratio = (float) image.width/image.height;
int img_height = floor(img_width/ratio);
image.resize(img_width, img_height);
You can check out this YouTube playlist for some image-processing tutorials.
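Putting the two ideas together for the original question, here is a minimal sketch (untested) that draws nine images at random sizes on an 850x550 canvas. The file names card0.png to card8.png and the 100-400 pixel width range are placeholders, not part of the original post.
PImage[] cards = new PImage[9];

void setup() {
  size(850, 550);
  for (int i = 0; i < cards.length; i++) {
    cards[i] = loadImage("card" + i + ".png");
  }
  noLoop(); // draw one random layout; press any key to reshuffle
}

void draw() {
  background(255);
  for (int i = 0; i < cards.length; i++) {
    // pick a random width and keep the original aspect ratio
    int w = floor(random(100, 400));
    float ratio = (float) cards[i].width / cards[i].height;
    int h = floor(w / ratio);
    float xPos = random(max(1, width - w));
    float yPos = random(max(1, height - h));
    image(cards[i], xPos, yPos, w, h);
  }
}

void keyPressed() {
  redraw();
}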

Extract Color from image using OpenCV

Predefined: my A4 sheet will always be white.
I need to detect the A4 sheet in an image. I am able to detect rectangles; the problem is that I get multiple rectangles from my image, so I extract sub-images from the detected rectangle points.
Now I want to match the extracted image's color against white.
I am using the method below to extract the image from the detected contours:
- (cv::Mat) getPaperAreaFromImage: (std::vector<cv::Point>) square, cv::Mat image
{
// declare used vars
int paperWidth = 210; // in mm, because scale factor is taken into account
int paperHeight = 297; // in mm, because scale factor is taken into account
cv::Point2f imageVertices[4];
float distanceP1P2;
float distanceP1P3;
BOOL isLandscape = true;
int scaleFactor;
cv::Mat paperImage;
cv::Mat paperImageCorrected;
cv::Point2f paperVertices[4];
// sort square corners for further operations
square = sortSquarePointsClockwise( square );
// rearrange to get proper order for getPerspectiveTransform()
imageVertices[0] = square[0];
imageVertices[1] = square[1];
imageVertices[2] = square[3];
imageVertices[3] = square[2];
// get distance between corner points for further operations
distanceP1P2 = distanceBetweenPoints( imageVertices[0], imageVertices[1] );
distanceP1P3 = distanceBetweenPoints( imageVertices[0], imageVertices[2] );
// calc paper, paperVertices; take orientation into account
if ( distanceP1P2 > distanceP1P3 ) {
scaleFactor = ceil( lroundf(distanceP1P2/paperHeight) ); // we always want to scale the image down to maintain the best quality possible
paperImage = cv::Mat( paperWidth*scaleFactor, paperHeight*scaleFactor, CV_8UC3 );
paperVertices[0] = cv::Point( 0, 0 );
paperVertices[1] = cv::Point( paperHeight*scaleFactor, 0 );
paperVertices[2] = cv::Point( 0, paperWidth*scaleFactor );
paperVertices[3] = cv::Point( paperHeight*scaleFactor, paperWidth*scaleFactor );
}
else {
isLandscape = false;
scaleFactor = ceil( lroundf(distanceP1P3/paperHeight) ); // we always want to scale the image down to maintain the best quality possible
paperImage = cv::Mat( paperHeight*scaleFactor, paperWidth*scaleFactor, CV_8UC3 );
paperVertices[0] = cv::Point( 0, 0 );
paperVertices[1] = cv::Point( paperWidth*scaleFactor, 0 );
paperVertices[2] = cv::Point( 0, paperHeight*scaleFactor );
paperVertices[3] = cv::Point( paperWidth*scaleFactor, paperHeight*scaleFactor );
}
cv::Mat warpMatrix = getPerspectiveTransform( imageVertices, paperVertices );
cv::warpPerspective(image, paperImage, warpMatrix, paperImage.size(), cv::INTER_LINEAR, cv::BORDER_CONSTANT );
if (true) {
cv::Rect rect = boundingRect(cv::Mat(square));
cv::rectangle(image, rect.tl(), rect.br(), cv::Scalar(0,255,0), 5, 8, 0);
UIImage *object = [self UIImageFromCVMat:paperImage];
}
// we want portrait output
if ( isLandscape ) {
cv::transpose(paperImage, paperImageCorrected);
cv::flip(paperImageCorrected, paperImageCorrected, 1);
return paperImageCorrected;
}
return paperImage;
}
EDITED: I used the method below to get the average color from the image. But now my problem is that after converting my original image to cv::Mat, when I crop it there is already a transparent grey cast over the image, so I always get the same color.
Is there any direct way to get the original color from a cv::Mat image?
- (UIColor *)averageColor: (UIImage *) image {
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char rgba[4];
CGContextRef context = CGBitmapContextCreate(rgba, 1, 1, 8, 4, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextDrawImage(context, CGRectMake(0, 0, 1, 1), image.CGImage);
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
if(rgba[3] > 0) {
CGFloat alpha = ((CGFloat)rgba[3])/255.0;
CGFloat multiplier = alpha/255.0;
return [UIColor colorWithRed:((CGFloat)rgba[0])*multiplier
green:((CGFloat)rgba[1])*multiplier
blue:((CGFloat)rgba[2])*multiplier
alpha:alpha];
}
else {
return [UIColor colorWithRed:((CGFloat)rgba[0])/255.0
green:((CGFloat)rgba[1])/255.0
blue:((CGFloat)rgba[2])/255.0
alpha:((CGFloat)rgba[3])/255.0];
}
}
EDIT 2:
(Input image and output screenshot omitted.)
I need to detect only the white A4 sheet.
I just resolved it using the Google Vision API.
My objective was to measure cracks in buildings from an image. In my case the user places an A4 sheet as a reference on the image where the crack is; I capture the A4 sheet and calculate the real-world size covered by each pixel. The builder then taps two points on the crack and I calculate the distance between them.
In Google Vision I used the document text detection API and printed my app name on the A4 sheet so that it covers the sheet vertically or horizontally. The Vision API detects that text and gives me its coordinates.

How to create VHS effect on iOS using GPUImage or another library

I am trying to make a VHS effect for an iOS app, just like in this video:
https://www.youtube.com/watch?v=8ipML-T5yDk
I want to achieve this effect with as few filters as possible to keep the CPU load low.
Basically what I need is to crank up the color levels to create a "chromatic aberration", change the sharpen parameters, and add some Gaussian blur plus some noise.
I am using GPUImage. The sharpen and Gaussian blur are easy to apply.
I am having two problems:
1) For the "chromatic aberration", the usual way to do it is to duplicate the video three times, set red to 0 on one copy, blue to 0 on another, and green to 0 on the last, and blend them together (just like in the tutorial). But doing this on an iPhone would be too CPU intensive.
Any idea how to achieve the same effect without having to duplicate the video and blend it?
2) I also want to add some noise but do not know which GPUImage effect to use. Any idea on this one too?
Thanks a lot,
Sébastian
(I'm not an iOS developer but I hope this can help someone.)
I wrote a VHS filter on Windows, this is what I did:
Crop the video frame to 4:3 aspect ratio and lower the resolution to 360*270.
Lower color saturation, and apply a color matrix to reduce green color to 93% (so the video will look purple).
Apply a convolve matrix to sharpen the video frame directionally. This is the kernel I used:
0 -0.5 0 0
-0.5 2.9 0 -0.5
0 -0.5 0 0
Blend real blank VHS footage into your video for the noise (search for "VHS overlay" on YouTube).
(Before/after video and screenshot comparisons omitted.)
The CPU and GPU consumption is OK. I apply this filter to a real-time camera preview on my old Windows phone (with a Snapdragon 808), and it works fine.
Code (C#, using Win2D library for GPU acceleration, implementing Windows.Media.Effects.IBasicVideoEffect interface):
public void ProcessFrame(ProcessVideoFrameContext context) //This method is called each frame
{
int outputWidth = 360; //Output Resolution
int outputHeight = 270;
IDirect3DSurface inputSurface = context.InputFrame.Direct3DSurface;
IDirect3DSurface outputSurface = context.OutputFrame.Direct3DSurface;
using (CanvasBitmap inputFrame = CanvasBitmap.CreateFromDirect3D11Surface(canvasDevice, inputSurface)) //The video frame to be processed
using (CanvasRenderTarget outputFrame = CanvasRenderTarget.CreateFromDirect3D11Surface(canvasDevice, outputSurface)) //The video frame after processing
using (CanvasDrawingSession outputFrameDrawingSession = outputFrame.CreateDrawingSession())
using (CanvasRenderTarget croppedFrame = new CanvasRenderTarget(canvasDevice, outputWidth, outputHeight, outputFrame.Dpi))
using (CanvasDrawingSession croppedFrameDrawingSession = croppedFrame.CreateDrawingSession())
using (CanvasBitmap overlay = Task.Run(async () => { return await CanvasBitmap.LoadAsync(canvasDevice, overlayFrames[new Random().Next(0, overlayFrames.Count - 1)]); }).Result) //"overlayFrames" is a list containing video frames from https://youtu.be/SHhRFU2Jyfs, here we just randomly pick one frame for blend
{
double inputWidth = inputFrame.Size.Width;
double inputHeight = inputFrame.Size.Height;
Rect ractangle;
//Crop the inputFrame to 360*270, save it to "croppedFrame"
if (3 * inputWidth > 4 * inputHeight)
{
double x = (inputWidth - inputHeight / 3 * 4) / 2;
ractangle = new Rect(x, 0, inputWidth - 2 * x, inputHeight);
}
else
{
double y = (inputHeight - inputWidth / 4 * 3) / 2;
ractangle = new Rect(0, y, inputWidth, inputHeight - 2 * y);
}
croppedFrameDrawingSession.DrawImage(inputFrame, new Rect(0, 0, outputWidth, outputHeight), ractangle, 1, CanvasImageInterpolation.HighQualityCubic);
//Apply a bunch of effects (mentioned in step 2,3,4) to "croppedFrame"
BlendEffect vhsEffect = new BlendEffect
{
Background = new ConvolveMatrixEffect
{
Source = new ColorMatrixEffect
{
Source = new SaturationEffect
{
Source = croppedFrame,
Saturation = 0.4f
},
ColorMatrix = new Matrix5x4
{
M11 = 1f,
M22 = 0.93f,
M33 = 1f,
M44 = 1f
}
},
KernelHeight = 3,
KernelWidth = 4,
KernelMatrix = new float[]
{
0, -0.5f, 0, 0,
-0.5f, 2.9f, 0, -0.5f,
0, -0.5f, 0, 0,
}
},
Foreground = overlay,
Mode = BlendEffectMode.Screen
};
//And draw the result to "outputFrame"
outputFrameDrawingSession.DrawImage(vhsEffect, ractangle, new Rect(0, 0, outputWidth, outputHeight));
}
}

Detecting HeartBeat Using WebCam?

I am trying to create an application which can detect your heartbeat using your computer's webcam. I have been working on the code for two weeks and this is how far I have got.
How does it work? Illustrated below...
Detecting face using opencv
Getting image of forehead
Applying filter to convert it into grayscale image [you can skip it]
Finding the average intensity of green pixels per frame
Saving the averages into an array
Applying FFT (I have used the Minim library)
Extracting the heartbeat from the FFT spectrum (here I need some help)
Here, I need help extracting the heartbeat from the FFT spectrum. Can anyone help me? Here is a similar application developed in Python, but I am not able to understand that code, so I am developing the same thing in Processing. Can anyone help me understand the part of the Python code where it extracts the heartbeat?
//---------import required ilbrary -----------
import gab.opencv.*;
import processing.video.*;
import java.awt.*;
import java.util.*;
import ddf.minim.analysis.*;
import ddf.minim.*;
//----------create objects---------------------------------
Capture video; // camera object
OpenCV opencv; // opencv object
Minim minim;
FFT fft;
//IIRFilter filt;
//--------- Create ArrayList--------------------------------
ArrayList<Float> poop = new ArrayList();
float[] sample;
int bufferSize = 128;
int sampleRate = 512;
int bandWidth = 20;
int centerFreq = 80;
//---------------------------------------------------
void setup() {
size(640, 480); // size of the window
minim = new Minim(this);
fft = new FFT( bufferSize, sampleRate);
video = new Capture(this, 640/2, 480/2); // initializing video object
opencv = new OpenCV(this, 640/2, 480/2); // initializing opencv object
opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE); // loading haar cscade file for face detection
video.start(); // start video
}
void draw() {
background(0);
// image(video, 0, 0 ); // show video in the background
opencv.loadImage(video);
Rectangle[] faces = opencv.detect();
video.loadPixels();
//------------ Finding faces in the video -----------
float gavg = 0;
for (int i = 0; i < faces.length; i++) {
noFill();
stroke(#FFB700); // yellow rectangle
rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height); // creating rectangle around the face (YELLOW)
stroke(#0070FF); //blue rectangle
rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height-2*faces[i].height/3); // creating a blue rectangle around the forehead
//-------------------- storing forehead white rectangle part into an image -------------------
stroke(0, 255, 255);
rect(faces[i].x+faces[i].width/2-15, faces[i].y+15, 30, 15);
PImage img = video.get(faces[i].x+faces[i].width/2-15, faces[i].y+15, 30, 15); // storing the forehead aera into a image
img.loadPixels();
img.filter(GRAY); // converting capture image rgb to gray
img.updatePixels();
int numPixels = img.width*img.height;
for (int px = 0; px < numPixels; px++) { // For each pixel in the video frame...
final color c = img.pixels[px];
final color luminG = c>>010 & 0xFF;
final float luminRangeG = luminG/255.0;
gavg = gavg + luminRangeG;
}
//--------------------------------------------------------
gavg = gavg/numPixels;
if (poop.size()< bufferSize) {
poop.add(gavg);
}
else poop.remove(0);
}
sample = new float[poop.size()];
for (int i=0;i<poop.size();i++) {
Float f = (float) poop.get(i);
sample[i] = f;
}
if (sample.length>=bufferSize) {
//fft.window(FFT.NONE);
fft.forward(sample, 0);
// bpf = new BandPass(centerFreq, bandwidth, sampleRate);
// in.addEffect(bpf);
float bw = fft.getBandWidth(); // returns the width of each frequency band in the spectrum (in Hz).
println(bw); // returns 21.5332031 Hz for spectrum [0] & [512]
for (int i = 0; i < fft.specSize(); i++)
{
// println( " Freq" + max(sample));
stroke(0, 255, 0);
float x = map(i, 0, fft.specSize(), 0, width);
line( x, height, x, height - fft.getBand(i)*100);
// text("FFT FREQ " + fft.getFreq(i), width/2-100, 10*(i+1));
// text("FFT BAND " + fft.getBand(i), width/2+100, 10*(i+1));
}
}
else {
println(sample.length + " " + poop.size());
}
}
void captureEvent(Capture c) {
c.read();
}
The FFT is applied in a window with 128 samples.
int bufferSize = 128;
During the draw() method the samples are stored in an array until the buffer is full, so that the FFT can be applied. After that the buffer is kept full: to make room for a new sample, the oldest one is removed. gavg is the average gray channel color.
gavg = gavg/numPixels;
if (poop.size()< bufferSize) {
poop.add(gavg);
}
else poop.remove(0);
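As a side note (my suggestion, not something from the original answer): written this way, the newest average is thrown away on the frames where the buffer is already full. A sliding window that always keeps the latest bufferSize samples would look like this:
if (poop.size() >= bufferSize) {
  poop.remove(0);    // drop the oldest average
}
poop.add(gavg);      // always append the newest one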
Copying poop to sample:
sample = new float[poop.size()];
for (int i=0;i < poop.size();i++) {
Float f = (float) poop.get(i);
sample[i] = f;
}
Now it is possible to apply the FFT to the sample array:
fft.forward(sample, 0);
The code only shows the spectrum result; the heartbeat frequency still has to be calculated.
Over all the bands in the FFT you have to find the maximum, and the position of that maximum corresponds to the heartbeat frequency.
int peakBand = 1;                        // skip band 0, the DC component
for (int i = 2; i < fft.specSize(); i++)
{ // find the band with the largest amplitude and remember its position
  if (fft.getBand(i) > fft.getBand(peakBand)) {
    peakBand = i;
  }
}
Then get the bandwidth to know the frequency.
float bw = fft.getBandWidth();
Converting the position of the maximum (the band index) into a frequency:
float heartBeatFrequency = peakBand * bw;
Once the samples array has reached the bufferSize of 128 (or more), forward the FFT with the samples array and then take the peak of the spectrum; that gives the heart rate.
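One caveat worth adding: getBandWidth() is derived from the sampleRate passed to the FFT constructor (512 in the sketch above), so the frequency in Hz is only meaningful if that rate matches how often forehead samples are actually collected, which is roughly the sketch's frame rate; treat this as an assumption. With that in mind, converting the peak frequency to beats per minute is a single multiplication:
// heartBeatFrequency is the peak frequency in Hz from the snippet above
float bpm = heartBeatFrequency * 60;   // Hz -> beats per minute
println(nf(bpm, 0, 1) + " BPM");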
Following Papers explains the same :
Measuring Heart Rate from Video - Isabel Bush - Stanford - link (Page 4 paragraphs below Figure 2 explain this.)
Real Time Heart Rate Monitoring From Facial RGB Color Video Using Webcam - H. Rahman, M.U. Ahmed, S. Begum, P. Funk - link (Page 4)
After looking at your question, I thought I would get my hands on this, and I tried making a repository for it.
I am having some issues, though, if someone could have a look at it.
Thank you David Clifte for this answer, it helped a lot.

Number of non-zero pixels in a cv::RotatedRect

As the title says, I'm trying to find the number of non-zero pixels in a certain area of a cv::Mat, namely within a RotatedRect.
For a regular Rect one could simply use countNonZeroPixels on a ROI. However, ROIs can only be regular (non-rotated) rectangles.
Another idea was to draw the rotated rectangle and use that as a mask. However, OpenCV neither supports drawing rotated rectangles nor does countNonZeroPixels accept a mask.
Does anyone have an elegant solution for this?
Thank you!
OK, so here's my first take at it.
The idea is to rotate the image in the reverse direction of the rectangle's rotation and then apply a ROI to the straightened rectangle.
This will break if the rotated rectangle is not completely within the image.
You can probably speed this up by applying another ROI before the rotation to avoid having to rotate the whole image...
#include <highgui.h>
#include <cv.h>

// From http://stackoverflow.com/questions/2289690/opencv-how-to-rotate-iplimage
cv::Mat rotateImage(const cv::Mat& source, cv::Point2f center, double angle)
{
    cv::Mat rot_mat = cv::getRotationMatrix2D(center, angle, 1.0);
    cv::Mat dst;
    cv::warpAffine(source, dst, rot_mat, source.size());
    return dst;
}

int main()
{
    cv::namedWindow("test1");

    // Our rotated rect
    int x = 300;
    int y = 350;
    int w = 200;
    int h = 50;
    float angle = 47;
    cv::RotatedRect rect = cv::RotatedRect(cv::Point2f(x,y), cv::Size2f(w,h), angle);

    // An empty image
    cv::Mat img = cv::Mat(cv::Size(640, 480), CV_8UC3);

    // Draw rotated rect as an ellipse to get some visual feedback
    cv::ellipse(img, rect, cv::Scalar(255,0,0), -1);

    // Rotate the image by rect.angle * -1
    cv::Mat rotimg = rotateImage(img, rect.center, -1 * rect.angle);

    // Set roi to the now unrotated rectangle
    cv::Rect roi;
    roi.x = rect.center.x - (rect.size.width / 2);
    roi.y = rect.center.y - (rect.size.height / 2);
    roi.width = rect.size.width;
    roi.height = rect.size.height;

    cv::imshow("test1", rotimg(roi));
    cv::waitKey(0);
}
A totally different approach might be to rotate your image (in the opposite direction) and still use the rectangular ROI in combination with countNonZeroPixels. The only problem is that you have to rotate your image around a pivot at the center of the ROI...
To avoid rotation in a similar task, I iterate over each pixel of the RotatedRect with a function like this:
double filling(Mat& img, RotatedRect& rect){
    double non_zero = 0;
    double total = 0;
    Point2f rect_points[4];
    rect.points( rect_points );
    for(Point2f i=rect_points[0];norm(i-rect_points[1])>1;i+=(rect_points[1]-i)/norm((rect_points[1]-i))){
        Point2f destination = i+rect_points[2]-rect_points[1];
        for(Point2f j=i;norm(j-destination)>1;j+=(destination-j)/norm((destination-j))){
            if(img.at<uchar>(j) != 0){
                non_zero+=1;
            }
            total+=1;
        }
    }
    return non_zero/total;
}
It looks like the usual iteration over a rectangle, but at each step we add a 1px unit vector to the current point, pointing towards the destination.
This loop does NOT visit every point and skips a few pixels, but that was okay for my task.
UPD: it is much better to use a LineIterator for the iteration:
Point2f rect_points[4];
rect.points(rect_points);
Point2f x_start = rect_points[0];
Point2f x_end = rect_points[1];
Point2f y_direction = rect_points[3] - rect_points[0];
LineIterator x = LineIterator(frame, x_start, x_end, 4);
for(int i = 0; i < x.count; ++i, ++x){
    LineIterator y = LineIterator(frame, x.pos(), x.pos() + y_direction, 4);
    for(int j = 0; j < y.count; ++j, ++y){
        Vec4b pixel = frame.at<Vec4b>(y.pos());
        /* YOUR CODE HERE */
    }
}
