How to add a camera pipeline module with A-Frame - 8thwall-xr

I am new to 8th Wall and to frontend development. I'm building a project with a number of image-target experiences developed in A-Frame, and on top of that I need to scan a QR code in AR.
From what I see in the documentation, I can add a camera pipeline module, but my question is: how do I start implementing that with an A-Frame component registration? The starter code is here:
AFRAME.registerComponent('qr-scan', {
  schema: {},
  init: function () {
    // Install a module which gets the camera feed as a UInt8Array.
    XR8.addCameraPipelineModule(
      XR8.CameraPixelArray.pipelineModule({luminance: true, width: 240, height: 320}))
    // Install a module that draws the camera feed to the canvas.
    XR8.addCameraPipelineModule(XR8.GlTextureRenderer.pipelineModule())
    // Create our custom application logic for scanning and displaying QR codes.
    XR8.addCameraPipelineModule({
      name: 'qrscan',
      onProcessCpu: ({onProcessGpuResult}) => {
        // CameraPixelArray.pipelineModule() returned these in onProcessGpu.
        const { pixels, rows, cols, rowBytes } = onProcessGpuResult.camerapixelarray
        const { wasFound, url, corners } = findQrCode(pixels, rows, cols, rowBytes)
        return { wasFound, url, corners }
      },
      onUpdate: ({onProcessCpuResult}) => {
        // These were returned by this module ('qrscan') in onProcessCpu.
        const { wasFound, url, corners } = onProcessCpuResult.qrscan
        if (wasFound) {
          showUrlAndCorners(url, corners)
        }
      },
    })
  },
})
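Note that findQrCode() and showUrlAndCorners() are placeholders in that starter code, not 8th Wall built-ins; you have to supply them yourself. As one possible direction, here is a minimal sketch of findQrCode() built on the jsQR library, assuming jsQR is loaded globally (e.g. via a script tag) and that pixels is the luminance buffer produced by the CameraPixelArray module configured above:

// Hypothetical findQrCode() implementation using jsQR (https://github.com/cozmo/jsQR).
// jsQR expects RGBA data, so each luminance byte is expanded into an RGBA pixel first.
function findQrCode(pixels, rows, cols, rowBytes) {
  const rgba = new Uint8ClampedArray(rows * cols * 4)
  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
      const v = pixels[r * rowBytes + c]  // rowBytes may include padding beyond cols
      const i = (r * cols + c) * 4
      rgba[i] = rgba[i + 1] = rgba[i + 2] = v
      rgba[i + 3] = 255
    }
  }
  const code = jsQR(rgba, cols, rows)
  if (!code) {
    return { wasFound: false, url: null, corners: null }
  }
  const { topLeftCorner, topRightCorner, bottomRightCorner, bottomLeftCorner } = code.location
  return {
    wasFound: true,
    url: code.data,
    corners: [topLeftCorner, topRightCorner, bottomRightCorner, bottomLeftCorner],
  }
}

showUrlAndCorners() is then just your own display logic, for example setting attributes on the component's this.el so the result shows up in the A-Frame scene.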

Related

Convert HTML to PDF Using OffScreenCanvas in React

OK so the question is self-explanatory.
Currently, I am doing the following:
1. Using the html2canvas npm package to convert my HTML to a canvas.
2. Converting the canvas to an image with canvas.toDataURL("image/png");
3. Using jsPDF to create a PDF from this image after some manipulations.
So basically all of this is CPU intensive and leads to an unresponsive page.
I'm trying to offload the process to a web worker, but I'm unable to postMessage either an HTML node or a canvas element to my worker thread.
Hence, I'm trying to use OffscreenCanvas, but I'm stuck on how to go about this.
The first step cannot be done in a Web Worker. You need access to the DOM to be able to draw it, and Workers don't have access to the DOM.
The second step could be done on an OffscreenCanvas; html2canvas accepts a { canvas } option that you can set to an OffscreenCanvas.
But once you get a context from an OffscreenCanvas you can't transfer it anymore, so you wouldn't be able to pass that OffscreenCanvas to the Worker, and you wouldn't gain anything from it since everything would still be done on the UI thread.
So the best we can do is to let html2canvas initialize an HTMLCanvasElement, draw on it, and then convert it to an image in a Blob. Blobs can traverse realms without any copying cost, and the toBlob() method can have its compression done asynchronously.
The third step can be done in a Worker since this PR.
I don't know React, so you will have to rewrite it, but here is a bare JS implementation:
script.js
const dpr = window.devicePixelRatio;
const worker = new Worker( "worker.js" );
worker.onmessage = makeDownloadLink;
worker.onerror = console.error;

// `target` is the DOM element you want to capture.
html2canvas( target ).then( canvas => {
  canvas.toBlob( (blob) => worker.postMessage( {
    source: blob,
    width: canvas.width / dpr,   // retina?
    height: canvas.height / dpr  // retina?
  } ) );
} );
worker.js
importScripts( "https://unpkg.com/jspdf#latest/dist/jspdf.umd.min.js" );
onmessage = ({ data }) => {
const { source, width, height } = data;
const reader = new FileReaderSync();
const data_url = reader.readAsDataURL( source );
const doc = new jspdf.jsPDF( { unit: "px", format: [ width, height ], orientation: width > height ? "l" : "p" });
doc.addImage( data_url, "PNG", 0, 0, width, height, { compression: "NONE" } );
postMessage( doc.output( "blob" ) );
};
And since StackSnippet's over-protected iframes break html2canvas, here is an external live example.
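The makeDownloadLink handler wired up in script.js is not shown above; a minimal sketch, assuming you simply want to expose the PDF Blob returned by the Worker as a download link, could look like this:

// Hypothetical makeDownloadLink(): turn the Blob posted back by the Worker into a download link.
function makeDownloadLink( { data: blob } ) {
  const url = URL.createObjectURL( blob );
  const link = document.createElement( "a" );
  link.href = url;
  link.download = "document.pdf";
  link.textContent = "Download PDF";
  document.body.append( link );
}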

Unity 3D 2017.2.1 camera feed looks zoomed in on iPad

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class CameraFeed : MonoBehaviour {

    // Use this for initialization
    public Renderer rend;

    void Start () {
        WebCamDevice[] devices = WebCamTexture.devices;

        // for debugging purposes, prints available devices to the console
        for (int i = 0; i < devices.Length; i++)
        {
            print("Webcam available: " + devices[i].name);
        }

        Renderer rend = this.GetComponentInChildren<Renderer>();

        // assuming the first available WebCam is desired
        WebCamTexture tex = new WebCamTexture(devices[0].name);
        rend.material.mainTexture = tex;
        //rend.material.SetTextureScale("tex", new Vector2(1.2f, 1.2f));
        tex.Play();
    }

    // Update is called once per frame
    void Update () {
    }
}
I use the above code to render the camera feed on a plane. The code works fine, but I see a zoomed feed (around 1.2x) when deployed to an iPad. How can I get the normal feed, or how do I zoom out the camera feed?
According to https://answers.unity.com/questions/1421387/webcamtexture-zoomed-in-on-ipad-pro.html if you insert this code it should scale properly:
rend.material.SetTextureScale("_Texture", new Vector2(1f, 1f));
Furthermore, it seems that you were trying something like this, but your vector was off (by 0.2f), and you had commented that line out as well. You could reduce the scale even further, though that could just make the image smaller if the camera can't support that much zoom-out :).

How to apply OpenCV filters to ARGB images in Processing?

In one part of a project, after displaying backgroundimage, I need to hide part of it by creating a mask (named mask1 in my code) and making the first 3000 pixels of mask1 colorless. Then I blur the result using the blur() method of the OpenCV library. The problem is that the OpenCV library seems to ignore the opacity (alpha channel) of the pixels in mask1. As a result, it is impossible to see backgroundimage behind the blurred image created by the OpenCV library. Here is my code:
import gab.opencv.*;

OpenCV opencv;
int[] userMap;
PImage backgroundimage, mask1;

void setup() {
  backgroundimage = loadImage("test.jpg");
  mask1 = createImage(640, 480, ARGB);
  opencv = new OpenCV(this, 640, 480);
  size(640, 480);
}

void draw() {
  image(backgroundimage, 0, 0);
  mask1.loadPixels();
  for (int index = 0; index < 640*480; index++) {
    if (index < 30000) {
      mask1.pixels[index] = color(0, 0, 0, 0);
    }
    else {
      mask1.pixels[index] = color(255, 255, 255);
    }
  }
  mask1.updatePixels();
  opencv.loadImage(mask1);
  opencv.blur(8);
  image(opencv.getSnapshot(), 0, 0);
}
Is there any other solution to blur mask1?

Taking a square photo with react-native-camera

By default, react-native-camera takes photos in the standard aspect ratio of the phone and outputs them as a Base64 PNG if the Camera.constants.CaptureTarget.memory target is set.
I am looking for a way to create square photos - either directly using the camera, or by converting the captured image data. I'm not sure whether something like that is possible with React Native, or whether I should go for native code instead.
The aspect prop changes only how the camera image is displayed in the viewfinder.
Here is my code:
<Camera
  ref={(cam) => {
    this.cam = cam;
  }}
  captureAudio={false}
  captureTarget={Camera.constants.CaptureTarget.memory}
  aspect={Camera.constants.Aspect.fill}>
</Camera>;

async takePicture() {
  var imagedata;
  try {
    imagedata = await this.cam.capture(); // Base64 png, not square
  } catch (err) {
    throw err;
  }
  return imagedata;
}
Use the getSize method on the Image and pass the data to the cropImage method of ImageEditor.
Looking at the cropData object, you can see that I pass the width of the image as the value for both the width and the height, creating a perfect square image.
Offsetting the Y axis is necessary so that the center of the image is cropped, rather than the top. Dividing the height in half and then subtracting half of the width from that number ((h / 2) - (w / 2)) ensures that you're always cropping from the center of the image, no matter what device you're using.
Image.getSize(originalImage, (w, h) => {
  const cropData = {
    offset: {
      x: 0,
      y: h / 2 - w / 2,
    },
    size: {
      width: w,
      height: w,
    },
  };

  ImageEditor.cropImage(
    originalImage,
    cropData,
    successURI => {
      // successURI contains your newly cropped image
    },
    error => {
      console.log(error);
    },
  );
});
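If you prefer to keep everything promise-based so it can be awaited right after takePicture(), the same logic can be wrapped in a helper; cropToSquare below is a hypothetical name, and this sketch assumes the value returned by takePicture() is a URI that Image.getSize can read:

// Hypothetical promise wrapper around the getSize/cropImage flow shown above.
function cropToSquare(originalImage) {
  return new Promise((resolve, reject) => {
    Image.getSize(originalImage, (w, h) => {
      const cropData = {
        offset: { x: 0, y: h / 2 - w / 2 },
        size: { width: w, height: w },
      };
      // resolves with the URI of the newly cropped image
      ImageEditor.cropImage(originalImage, cropData, resolve, reject);
    }, reject);
  });
}

// Usage:
// const uri = await this.takePicture();
// const squareUri = await cropToSquare(uri);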

How to flip FaceOSC in Processing 3.2.1

I am new to Processing and am now trying to use FaceOSC. Everything is already done, but it is hard to play the game I made when everything is not a mirror view. So I want to flip the data that FaceOSC sends to Processing to create the video.
I'm not sure if FaceOSC sends the video, because I've tried flipping it like a video and it doesn't work. I also tried flipping it like an image, and flipping the canvas, but it still doesn't work. Or maybe I did it wrong. Please HELP!
//XXXXXXX// This is some of my code.
import oscP5.*;
import codeanticode.syphon.*;

OscP5 oscP5;
SyphonClient client;
PGraphics canvas;

boolean found;
PVector[] meshPoints;

void setup() {
  size(640, 480, P3D);
  frameRate(30);
  initMesh();
  oscP5 = new OscP5(this, 8338);

  // USE THESE 2 EVENTS TO DRAW THE
  // FULL FACE MESH:
  oscP5.plug(this, "found", "/found");
  oscP5.plug(this, "loadMesh", "/raw");
  // plugin for mouth
  oscP5.plug(this, "mouthWidthReceived", "/gesture/mouth/width");
  oscP5.plug(this, "mouthHeightReceived", "/gesture/mouth/height");

  // initialize the syphon client with the name of the server
  client = new SyphonClient(this, "FaceOSC");
  // prep the PGraphics object to receive the camera image
  canvas = createGraphics(640, 480, P3D);
}

void draw() {
  background(0);
  stroke(255);

  // flip like a vdo here, does not work
  /* pushMatrix();
  translate(canvas.width, 0);
  scale(-1,1);
  image(canvas, -canvas.width, 0, width, height);
  popMatrix(); */

  image(canvas, 0, 0, width, height);

  if (found) {
    fill(100);
    drawFeature(faceOutline);
    drawFeature(leftEyebrow);
    drawFeature(rightEyebrow);
    drawFeature(nosePart1);
    drawFeature(nosePart2);
    drawFeature(leftEye);
    drawFeature(rightEye);
    drawFeature(mouthPart1);
    drawFeature(mouthPart2);
    drawFeature(mouthPart3);
    drawFeature(mouthPart4);
    drawFeature(mouthPart5);
  }
}

//XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
void drawFeature(int[] featurePointList) {
  for (int i = 0; i < featurePointList.length; i++) {
    PVector meshVertex = meshPoints[featurePointList[i]];
    if (i > 0) {
      PVector prevMeshVertex = meshPoints[featurePointList[i-1]];
      line(meshVertex.x, meshVertex.y, prevMeshVertex.x, prevMeshVertex.y);
    }
    ellipse(meshVertex.x, meshVertex.y, 3, 3);
  }
}

//XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
public void found(int i) {
  // println("found: " + i); // 1 == found, 0 == not found
  found = i == 1;
}
//XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
The scale() and translate() snippet you're trying to use makes sense, but it looks like you're using it in the wrong place. I'm not sure what canvas is meant to do, but I'm guessing the face features drawn by the drawFeature() calls are what you want to mirror. If so, you should place those calls between pushMatrix() and popMatrix(), right after the scale().
I would try something like this in draw():
void draw() {
  background(0);
  stroke(255);

  //flip horizontal
  pushMatrix();
  translate(width, 0);
  scale(-1, 1);
  if (found) {
    fill(100);
    drawFeature(faceOutline);
    drawFeature(leftEyebrow);
    drawFeature(rightEyebrow);
    drawFeature(nosePart1);
    drawFeature(nosePart2);
    drawFeature(leftEye);
    drawFeature(rightEye);
    drawFeature(mouthPart1);
    drawFeature(mouthPart2);
    drawFeature(mouthPart3);
    drawFeature(mouthPart4);
    drawFeature(mouthPart5);
  }
  popMatrix();
}
The push/pop matrix calls isolate the coordinate space.
The coordinate system origin(0,0) is the top left corner: this is why everything is translated by the width before scaling x by -1. Because it's not at the centre, simply mirroring won't leave the content in the same place.
For more details checkout the Processing Transform2D tutorial
Here's a basic example:
boolean mirror;

void setup(){
  size(640,480);
}

void draw(){
  if(mirror){
    pushMatrix();
    //translate, otherwise mirrored content will be off screen (pivot is at top left corner, not centre)
    translate(width,0);
    //scale x by -1 to mirror
    scale(-1,1);
    //draw mirrored content
    drawStuff();
    popMatrix();
  }else{
    drawStuff();
  }
}

//this could be the face preview
void drawStuff(){
  background(0);
  triangle(0,0,width,0,0,height);
  text("press m to toggle mirroring",450,470);
}

void keyPressed(){
  if(key == 'm') mirror = !mirror;
}
Another option is to mirror each coordinate, but in your case that would be a lot of effort when scale(-1,1) will do the trick. For reference, to mirror a coordinate you simply subtract the current value from the largest value it can have:
void setup(){
  size(640,480);
  background(255);
}

void draw(){
  ellipse(mouseX,mouseY,30,30);
  //subtract the current value (mouseX in this case) from the largest value it can have (width in this case)
  ellipse(width-mouseX,mouseY,30,30);
}
You can run these examples right here:
var mirror;

function setup(){
  createCanvas(640,225);
  fill(255);
}

function draw(){
  if(mirror){
    push();
    //translate, otherwise mirrored content will be off screen (pivot is at top left corner, not centre)
    translate(width,0);
    //scale x by -1 to mirror
    scale(-1,1);
    //draw mirrored content
    drawStuff();
    pop();
  }else{
    drawStuff();
  }
}

//this could be the face preview
function drawStuff(){
  background(0);
  triangle(0,0,width,0,0,height);
  text("press m to toggle mirroring",450,215);
}

function keyPressed(){
  if(key == 'm' || key == 'M') mirror = !mirror;
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/0.5.4/p5.min.js"></script>
function setup(){
  createCanvas(640,225);
  background(0);
  fill(0);
  stroke(255);
}

function draw(){
  ellipse(mouseX,mouseY,30,30);
  //subtract the current value (mouseX in this case) from the largest value it can have (width in this case)
  ellipse(width-mouseX,mouseY,30,30);
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/0.5.4/p5.min.js"></script>
