p5.js image in a buffer

I am trying to load the image "image.png" and place it in a previously created createGraphics() buffer.
However, when I draw the buffer in draw(), the image is not there.
I am wondering if an image can be loaded into a buffer, and if so, how?
var buffer;
var image;

function setup() {
  createCanvas(800, 800);
  image = loadImage("image.png");
  buffer = createGraphics(800, 800);
  buffer.background(255, 255, 0);
  buffer.image(image, 0, 0)
}

function draw() {
  image(buffer, 0, 0);
}
Any help would be greatly appreciated.

It is not a good idea to call your variable image, as p5.js already has an image() function (which you use in draw()).
Use preload() to load the image. This guarantees that the image is actually loaded before you try to draw it into your buffer within setup(); see the p5.js reference for preload() for more info on this.
A working example uses the following p5.js code:
var buffer;
var img;

function preload() {
  img = loadImage("p5im.png");
}

function setup() {
  createCanvas(600, 600);
  buffer = createGraphics(400, 400);
  buffer.background(255, 255, 0);
  buffer.image(img, 0, 0);
}

function draw() {
  image(buffer, 0, 0);
}
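As an aside, loadImage() also accepts a success callback, so if you would rather not use preload(), you can draw into the buffer only once the image has actually arrived. A minimal sketch of that approach (same file name as above):

var buffer;

function setup() {
  createCanvas(600, 600);
  buffer = createGraphics(400, 400);
  buffer.background(255, 255, 0);
  // the callback runs only after the image has finished loading
  loadImage("p5im.png", function(img) {
    buffer.image(img, 0, 0);
  });
}

function draw() {
  image(buffer, 0, 0);
}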


Openlayers rotation broken when using precompose to clip a layer

After updating OpenLayers to >4.0, map rotation is completely broken when using a precompose hook:
function precompose(event) {
  var context = event.context;
  context.beginPath();
  context.moveTo(0, 0);
  context.lineTo(100, 0);
  context.lineTo(100, 100);
  context.lineTo(0, 100);
  context.lineTo(0, 0);
  context.closePath();
  context.fillStyle = "rgba(0, 5, 25, 0.75)";
  context.fill();
}
It is important to save the canvas context; for more information, see the MDN documentation for CanvasRenderingContext2D.
function precompose(event) {
  var context = event.context;
  context.save(); // be sure to save the context before anything is done
  // ...
}
Also, use context.restore() in the postcompose hook.
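For example, a matching postcompose handler can simply restore what precompose saved (a minimal sketch; it assumes the hook is registered the same way as your precompose hook):

function postcompose(event) {
  // restore the state saved in precompose so the map's own transforms
  // (including rotation) are applied to an untouched context
  event.context.restore();
}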

How to Flip FaceOSC in Processing 3.2.1

I am new to Processing and am now trying to use FaceOSC. Everything is already done, but it is hard to play the game I made when the view is not mirrored. So I want to flip the data that FaceOSC sends to Processing to create the video.
I'm not sure whether FaceOSC sends the video, because I've tried flipping it like a video, but it doesn't work. I also tried flipping it like an image, and flipping the canvas, but that still doesn't work. Or maybe I did it wrong. Please help!
// This is some of my code:
import oscP5.*;
import codeanticode.syphon.*;

OscP5 oscP5;
SyphonClient client;
PGraphics canvas;
boolean found;
PVector[] meshPoints;

void setup() {
  size(640, 480, P3D);
  frameRate(30);
  initMesh();
  oscP5 = new OscP5(this, 8338);
  // USE THESE 2 EVENTS TO DRAW THE
  // FULL FACE MESH:
  oscP5.plug(this, "found", "/found");
  oscP5.plug(this, "loadMesh", "/raw");
  // plugin for mouth
  oscP5.plug(this, "mouthWidthReceived", "/gesture/mouth/width");
  oscP5.plug(this, "mouthHeightReceived", "/gesture/mouth/height");
  // initialize the syphon client with the name of the server
  client = new SyphonClient(this, "FaceOSC");
  // prep the PGraphics object to receive the camera image
  canvas = createGraphics(640, 480, P3D);
}
void draw() {
  background(0);
  stroke(255);
  // flip like a video here -- does not work
  /*
  pushMatrix();
  translate(canvas.width, 0);
  scale(-1, 1);
  image(canvas, -canvas.width, 0, width, height);
  popMatrix();
  */
  image(canvas, 0, 0, width, height);
  if (found) {
    fill(100);
    drawFeature(faceOutline);
    drawFeature(leftEyebrow);
    drawFeature(rightEyebrow);
    drawFeature(nosePart1);
    drawFeature(nosePart2);
    drawFeature(leftEye);
    drawFeature(rightEye);
    drawFeature(mouthPart1);
    drawFeature(mouthPart2);
    drawFeature(mouthPart3);
    drawFeature(mouthPart4);
    drawFeature(mouthPart5);
  }
}
//XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
void drawFeature(int[] featurePointList) {
  for (int i = 0; i < featurePointList.length; i++) {
    PVector meshVertex = meshPoints[featurePointList[i]];
    if (i > 0) {
      PVector prevMeshVertex = meshPoints[featurePointList[i-1]];
      line(meshVertex.x, meshVertex.y, prevMeshVertex.x, prevMeshVertex.y);
    }
    ellipse(meshVertex.x, meshVertex.y, 3, 3);
  }
}
//XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
public void found(int i) {
  // println("found: " + i); // 1 == found, 0 == not found
  found = i == 1;
}
//XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
The scale() and translate() snippet you're trying to use makes sense, but it looks like you're using it in the wrong place. I'm not sure what canvas is meant to do, but I'm guessing the face features drawn by the drawFeature() calls are what you want to mirror. If so, you should place those calls between pushMatrix() and popMatrix(), right after the scale().
I would try something like this in draw():
void draw() {
  background(0);
  stroke(255);
  // flip horizontally
  pushMatrix();
  translate(width, 0);
  scale(-1, 1);
  if (found) {
    fill(100);
    drawFeature(faceOutline);
    drawFeature(leftEyebrow);
    drawFeature(rightEyebrow);
    drawFeature(nosePart1);
    drawFeature(nosePart2);
    drawFeature(leftEye);
    drawFeature(rightEye);
    drawFeature(mouthPart1);
    drawFeature(mouthPart2);
    drawFeature(mouthPart3);
    drawFeature(mouthPart4);
    drawFeature(mouthPart5);
  }
  popMatrix();
}
The pushMatrix()/popMatrix() calls isolate the coordinate space.
The coordinate system origin (0,0) is the top-left corner: this is why everything is translated by the width before scaling x by -1. Because the pivot is not at the centre, simply mirroring would not leave the content in the same place.
For more details, check out the Processing 2D Transformations tutorial.
Here's a basic example:
boolean mirror;

void setup() {
  size(640, 480);
}

void draw() {
  if (mirror) {
    pushMatrix();
    // translate, otherwise mirrored content will be off screen
    // (the pivot is at the top-left corner, not the centre)
    translate(width, 0);
    // scale x by -1 to mirror
    scale(-1, 1);
    // draw mirrored content
    drawStuff();
    popMatrix();
  } else {
    drawStuff();
  }
}

// this could be the face preview
void drawStuff() {
  background(0);
  triangle(0, 0, width, 0, 0, height);
  text("press m to toggle mirroring", 450, 470);
}

void keyPressed() {
  if (key == 'm') mirror = !mirror;
}
Another option is to mirror each coordinate yourself, but in your case that would be a lot of effort when scale(-1,1) will do the trick. For reference, to mirror a coordinate you simply subtract the current value from the largest value it can have:
void setup() {
  size(640, 480);
  background(255);
}

void draw() {
  ellipse(mouseX, mouseY, 30, 30);
  // subtract the current value (mouseX in this case) from the largest value it can have (width in this case)
  ellipse(width - mouseX, mouseY, 30, 30);
}
Here are the same examples ported to p5.js:
var mirror;

function setup() {
  createCanvas(640, 225);
  fill(255);
}

function draw() {
  if (mirror) {
    push();
    // translate, otherwise mirrored content will be off screen
    // (the pivot is at the top-left corner, not the centre)
    translate(width, 0);
    // scale x by -1 to mirror
    scale(-1, 1);
    // draw mirrored content
    drawStuff();
    pop();
  } else {
    drawStuff();
  }
}

// this could be the face preview
function drawStuff() {
  background(0);
  triangle(0, 0, width, 0, 0, height);
  // keep the label inside the 640x225 canvas
  text("press m to toggle mirroring", 450, 215);
}

function keyPressed() {
  if (key == 'm' || key == 'M') mirror = !mirror;
}
function setup() {
  createCanvas(640, 225);
  background(0);
  fill(0);
  stroke(255);
}

function draw() {
  ellipse(mouseX, mouseY, 30, 30);
  // subtract the current value (mouseX in this case) from the largest value it can have (width in this case)
  ellipse(width - mouseX, mouseY, 30, 30);
}

How to store image from cam?

I have just started with processing.js.
The goal of my program is to apply an OpenCV image filter to each video frame.
So this was my plan (however, I found out it does not work this way :<):

1. Get the video stream from a Capture object (from the processing.video package).
2. Store the current frame (I hope it can be stored as a PImage object).
3. Apply the OpenCV image filter.
4. Call the image() method with the filtered PImage object.

I found out how to get the video stream from the cam, but I do not know how to store a frame.
import processing.video.*;
import gab.opencv.*;

Capture cap;
OpenCV opencv;

public void setup() {
  //size(800, 600);
  size(640, 480);
  colorMode(RGB, 255, 255, 255, 100);
  cap = new Capture(this, width, height);
  opencv = new OpenCV(this, cap);
  cap.start();
  background(0);
}

public void draw() {
  if (cap.available()) {
    // returns void
    cap.read();
  }
  image(cap, 0, 0);
}
This code gets the video stream and shows what it receives. However, I cannot store a single frame, since Capture.read() returns void.
After storing the current frame, I would like to transform the PImages with OpenCV, like this:
PImage gray = opencv.getSnapshot();
opencv.threshold(80);
PImage thresh = opencv.getSnapshot();

opencv.loadImage(gray);
opencv.blur(12);
PImage blur = opencv.getSnapshot();

opencv.loadImage(gray);
opencv.adaptiveThreshold(591, 1);
PImage adaptive = opencv.getSnapshot();
Is there any decent way to store and transform the current frame? (I think my way, showing each frame after saving and transforming the current image, uses a lot of resources depending on the frame rate.)
Thanks for the answers :D
Not sure exactly what you want to do, and I'm sure you solved it already, but this could be useful for someone anyway...
It seems you can just use the Capture object's name directly wherever a PImage is expected:
cap = new Capture(this, width, height);
// code starting and reading the capture goes in here
PImage snapshot = cap;
// then you can do whatever you wanted to do with the PImage
snapshot.save("snapshot.jpg");
// actually, this seems to work fine too
cap.save("snapshot.jpg");
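One caveat (my assumption, not something the answer above states): assigning cap to a PImage variable copies a reference, not the pixel data, so the "snapshot" keeps changing as new frames are read. If you need an independent copy of the current frame, PImage's get() should do it, since Capture extends PImage:

// get() returns an independent copy of the current frame's pixels
PImage frozen = cap.get();
frozen.save("frozen.jpg");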
Use opencv.loadImage(cap). For example:
if (cap.available() == true) {
  cap.read();
}
opencv.loadImage(cap);
opencv.blur(15);
image(opencv.getSnapshot(), 0, 0);
Hope this helps!

Paint to a bitmap using DirectX under Windows Metro

I am trying to use DirectX to create and paint into a bitmap, and then later use that bitmap to paint to the screen. I have the following test code, which I have pieced together from a number of sources. I seem to be able to create the bitmap without errors, and I try to fill it with red. However, when I use the bitmap to paint to the screen I just get a black rectangle. Can anyone tell me what I might be doing wrong? Note that this is running as a Windows 8 Metro app, if that makes a difference.
Note that the formatter seems to be removing my angle brackets and I don't know how to stop it from doing that. The 'device' variable is of class ID3D11Device, the 'context' variable is ID3D11DeviceContext, the 'd2dContext' variable is ID2D1DeviceContext, the 'dxgiDevice' variable is IDXGIDevice, and the 'd2dDevice' is ID2D1Device.
void PaintTestBitmap(GRectF& rect)
{
  HRESULT hr;
  UINT creationFlags = D3D11_CREATE_DEVICE_BGRA_SUPPORT;
  D3D_FEATURE_LEVEL featureLevels[] =
  {
    D3D_FEATURE_LEVEL_11_1,
    D3D_FEATURE_LEVEL_11_0,
    D3D_FEATURE_LEVEL_10_1,
    D3D_FEATURE_LEVEL_10_0,
    D3D_FEATURE_LEVEL_9_3,
    D3D_FEATURE_LEVEL_9_2,
    D3D_FEATURE_LEVEL_9_1
  };
  ComPtr<ID3D11Device> device;
  ComPtr<ID3D11DeviceContext> context;
  ComPtr<ID2D1DeviceContext> d2dContext;
  hr = D3D11CreateDevice(NULL,
                         D3D_DRIVER_TYPE_HARDWARE,
                         NULL,
                         creationFlags,
                         featureLevels,
                         ARRAYSIZE(featureLevels),
                         D3D11_SDK_VERSION,
                         &device,
                         NULL,
                         &context);
  if (SUCCEEDED(hr))
  {
    ComPtr<IDXGIDevice> dxgiDevice;
    device.As(&dxgiDevice);
    // create D2D device context
    ComPtr<ID2D1Device> d2dDevice;
    hr = D2D1CreateDevice(dxgiDevice.Get(), NULL, &d2dDevice);
    if (SUCCEEDED(hr))
    {
      hr = d2dDevice->CreateDeviceContext(D2D1_DEVICE_CONTEXT_OPTIONS_NONE, &d2dContext);
      if (SUCCEEDED(hr))
      {
        ID2D1Bitmap1 *pBitmap;
        D2D1_SIZE_U size = { rect.m_width, rect.m_height };
        D2D1_BITMAP_PROPERTIES1 properties = {{ DXGI_FORMAT_R8G8B8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED }, 96, 96, D2D1_BITMAP_OPTIONS_TARGET, 0 };
        hr = d2dContext->CreateBitmap(size, (const void *) 0, 0, &properties, &pBitmap);
        if (SUCCEEDED(hr))
        {
          d2dContext->SetTarget(pBitmap);
          d2dContext->BeginDraw();
          d2dContext->Clear(D2D1::ColorF(D2D1::ColorF::Red)); // Clear the bitmap to Red
          d2dContext->EndDraw();
          // If I now use pBitmap to paint to the screen, I get a black rectangle
          // rather than the red one I was expecting
        }
      }
    }
  }
}
Note that I am fairly new to both DirectX and Metro style apps.
Thanks
What you're probably forgetting is that you don't draw your bitmap onto your context.
What you can try is:
hr = d2dContext->CreateBitmap(size, (const void *) 0, 0, &properties, &pBitmap);
if (SUCCEEDED(hr))
{
  ID2D1Image *pImage = nullptr;
  d2dContext->GetTarget(&pImage); // remember the original render target
  d2dContext->SetTarget(pBitmap);
  d2dContext->BeginDraw();
  d2dContext->Clear(D2D1::ColorF(D2D1::ColorF::Red)); // Clear the bitmap to Red
  d2dContext->SetTarget(pImage); // switch back to the original target
  d2dContext->DrawBitmap(pBitmap);
  d2dContext->EndDraw();
}
What you do is store your render target, then draw into your bitmap, and at the end you set your context back to the original render target and let it draw your bitmap onto it.

Freeing loaded texture on WebGL

The Textured Box demo on http://www.khronos.org/webgl/wiki/Demo_Repository has the following code snippets:
function loadImageTexture(ctx, url)
{
  var texture = ctx.createTexture(); // allocate texture
  texture.image = new Image();
  texture.image.onload = function()
    { doLoadImageTexture(ctx, texture.image, texture) }
  texture.image.src = url;
  return texture;
}

function doLoadImageTexture(ctx, image, texture)
{
  ctx.bindTexture(ctx.TEXTURE_2D, texture);
  ctx.texImage2D(ctx.TEXTURE_2D, 0, ctx.RGBA, ctx.RGBA, ctx.UNSIGNED_BYTE, image); // loaded the image
  ...
}
...
var spiritTexture = loadImageTexture(gl, "resources/spirit.jpg");
...
How do you free the allocated/loaded texture in order to avoid a (graphics) memory leak?
Will the following code free both the loaded/allocated texture and image?
spiritTexture = null;
Thanks in advance for your help.
NOTE: Cross-posted on http://www.khronos.org/message_boards/viewtopic.php?f=43&t=3367 on December 23, 2010, but no answer so far.
ctx.deleteTexture(spiritTexture);
That should free the texture on the GPU.
Regarding the image, you should just modify loadImageTexture so that it doesn't store the image inside the texture object. The reference to the image is not needed outside loadImageTexture, and you can let it go out of scope naturally when doLoadImageTexture completes.
i.e. make it something like:
function loadImageTexture(ctx, url)
{
  var texture = ctx.createTexture(); // allocate texture
  var image = new Image();
  image.onload = function() { doLoadImageTexture(ctx, image, texture) }
  image.src = url;
  return texture;
}
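Putting it together, the cleanup might look like this (a minimal sketch; nulling the variable afterwards just lets the JavaScript wrapper object be garbage collected):

// free the GPU-side storage for the texture
ctx.deleteTexture(spiritTexture);
// drop the reference so the WebGLTexture wrapper itself can be collected
spiritTexture = null;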
