Passing / packing attributes to a WebGL shader - webgl

I need help understanding how to pass an additional attribute in an existing system. I am using
Nexus to render my mesh. Nexus sort of uses three.js, but down the line it actually makes straight WebGL calls. I want to add a cool-looking wireframe effect to it, and I found this wireframe shader. Long story short, I added a barycentric attribute to the shader that Nexus loads. I can get the location of the attribute and all that jazz. What I am having trouble with is understanding how to modify Nexus's WebGL calls to pass in additional data for that attribute. The code in question is this:
function readyNode(node) {
    var m = node.mesh;
    var n = node.id;
    var nv = m.nvertices[n];
    var nf = m.nfaces[n];
    var model = node.model;
    var vertices;
    var indices;
    indices = new Uint8Array(node.buffer, nv * m.vsize, nf * m.fsize);
    // Original buffer size plus room for one extra vec3 (12 bytes) per vertex.
    const konSize = nv * m.vsize + 12 * nv;
    vertices = new Uint8Array(konSize);
    var view = new Uint8Array(node.buffer, 0, nv * m.vsize);
    var v = view.subarray(0, nv * 12); // positions: 12 bytes per vertex
    vertices.set(v);
    var konOff = nv * 12;
    var off = nv * 12;
    //var barycentric = addBarycentricCoordinates( indices, false );
    //vertices.set(barycentric, konOff);
    //var konOff = nv*12;
    // Test: pack the positions a second time where the barycentric data should go.
    vertices.set(v, konOff);
    konOff += nv * 12;
    if (m.vertex.texCoord) {
        var uv = view.subarray(off, off + nv * 8); // texcoords: 8 bytes per vertex
        vertices.set(uv, konOff);
        konOff += nv * 8;
        off += nv * 8;
    }
    if (m.vertex.normal && m.vertex.color) {
        var no = view.subarray(off, off + nv * 6); // normals: 6 bytes per vertex
        var co = view.subarray(off + nv * 6, off + nv * 6 + nv * 4); // colors: 4 bytes per vertex
        vertices.set(co, konOff);
        vertices.set(no, konOff + nv * 4);
    } else {
        if (m.vertex.normal) {
            var no = view.subarray(off, off + nv * 6);
            vertices.set(no, off);
        }
        if (m.vertex.color) {
            var co = view.subarray(off, off + nv * 4);
            vertices.set(co, off);
        }
    }
    var gl = node.context.gl;
    var vbo = m.vbo[n] = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
    gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);
    var ibo = m.ibo[n] = gl.createBuffer();
    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, ibo);
    gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.STATIC_DRAW);
}
The original code does not have the barycentric part, and without it, it all works. What I don't understand is how to correctly add barycentric data to the "vertices" array to pass it to the shader. The code above doesn't crash, but it does cause the UVs to be out of whack. Barycentric coordinates are just a vec3, similar to position; they are just 1s and 0s, one per vertex as I understand it. I think I should be able to pack them into the "vertices" array with correct offsets. I even modified the above code to pass in positions instead of barycentric data, and it still doesn't work (it still causes the UVs to be out of whack). Not sure if this is enough context to reason about it.
Am I doing something wrong with the offsets?
Also, do I need to de-index my data for this to work? The wireframe library does do that (a sketch of what I mean is below).
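For reference, here is roughly what I mean by de-indexing: each triangle gets its own three vertices, so every corner can carry a unique barycentric value of (1,0,0), (0,1,0), or (0,0,1). This is just a sketch of my understanding (a hypothetical helper of my own, not Nexus code; it assumes float positions and a plain index array):
// Hypothetical helper: expand indexed positions into a triangle soup
// and build one barycentric vec3 per (now unique) vertex.
function deindexWithBarycentrics(positions, indices) {
    var triCount = indices.length / 3;
    var outPositions = new Float32Array(triCount * 9); // 3 verts * vec3 per triangle
    var barycentric = new Float32Array(triCount * 9);
    for (var t = 0; t < triCount; t++) {
        for (var corner = 0; corner < 3; corner++) {
            var src = indices[t * 3 + corner] * 3;
            var dst = (t * 3 + corner) * 3;
            outPositions[dst] = positions[src];
            outPositions[dst + 1] = positions[src + 1];
            outPositions[dst + 2] = positions[src + 2];
            // corner 0 -> (1,0,0), corner 1 -> (0,1,0), corner 2 -> (0,0,1)
            barycentric[dst + corner] = 1;
        }
    }
    return { positions: outPositions, barycentric: barycentric };
}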
Let me know if there is more info I can provide.
P.S. Nexus is a chunking / LODing system. It loads patches over time. As far as I understand, it's only one "mesh" object; it just adds more data to the buffers as it downloads them.
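For completeness, the draw side has to mirror whatever packing readyNode() produces; the attribute pointer offsets are where I suspect things go wrong. A sketch of what I would expect (hypothetical attribute locations, assuming the barycentrics are appended directly after the positions as in the code above):
// Sketch only: offsets follow the layout built in readyNode() above,
// i.e. [positions | barycentrics | uvs | ...] for nv vertices.
gl.bindBuffer(gl.ARRAY_BUFFER, vbo);

// positions: 3 floats per vertex, starting at byte 0
gl.enableVertexAttribArray(positionLoc);
gl.vertexAttribPointer(positionLoc, 3, gl.FLOAT, false, 0, 0);

// barycentrics: 3 floats per vertex, right after the nv positions
gl.enableVertexAttribArray(barycentricLoc);
gl.vertexAttribPointer(barycentricLoc, 3, gl.FLOAT, false, 0, nv * 12);

// uvs: 2 floats per vertex, shifted from nv*12 to nv*24 by the insertion
gl.enableVertexAttribArray(uvLoc);
gl.vertexAttribPointer(uvLoc, 2, gl.FLOAT, false, 0, nv * 24);
If the renderer still computes the UV offset as nv * 12 while the data now starts at nv * 24, that would be consistent with the "UVs out of whack" symptom.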

Related

EmguCV cannot train more than 210 images using EigenFaceRecognizer: it stops writing new details or data

Good day everyone. I am relatively new to OpenCV, using the .NET wrapper EmguCV. My program does simple face detection and recognition: first I train user faces, at least 20 images of 100x100 pixels per user, and write the (EigenFaceRecognizer) data to YML files, then load these files (user images and data in YML) before running real-time recognition or comparison. It worked perfectly fine with 9 users (9x20 = 180 images). However, when I try to register or train another user, I notice the (EigenFaceRecognizer) stops writing the data to the YML. How do we solve this? The format of my data with the yml extension is below:
opencv_eigenfaces:
   threshold: .Inf
   num_components: 10
   mean: !!opencv-matrix
      rows: 1
      cols: 4096
      dt: d
      data: []
The trainingData.yml https://www.dropbox.com/s/itm58o24lka9wa3/trainingData.yml?dl=0
I figured it out: the problem was just not enough time for writing the data, so I needed to increase the delay.
private async Task LoadData()
{
    outputBox.Clear();
    var i = 0;
    var itemData = Directory.EnumerateFiles("trainingset/", "*.bmp");
    var enumerable = itemData as IList<string> ?? itemData.ToList();
    var total = enumerable.Count();
    _arrayNumber = new int[total];
    var listMat = new List<Mat>();
    foreach (var file in enumerable)
    {
        var inputImg = Image.FromFile(file);
        _inputEmGuImage = new Image<Bgr, byte>(new Bitmap(inputImg));
        var imgGray = _inputEmGuImage.Convert<Gray, byte>();
        listMat.Add(imgGray.Mat);
        var number = file.Split('/')[1].ToString().Split('_')[0];
        if (number != "")
        {
            _arrayNumber[i] = int.Parse(number);
        }
        i++;
        processImg.Image = _inputEmGuImage.ToBitmap();
        outputBox.AppendText($"Person Id: {number} {Environment.NewLine}");
        if (total == i)
        {
            fisherFaceRecognizer.Train(listMat.ToArray(), _arrayNumber);
            fisherFaceRecognizer.Write(YlmPath);
            // FaceRecognition.Train(listMat.ToArray(), _arrayNumber);
            // FaceRecognition.Write(YlmPath);
            MessageBox.Show(@"Total of " + _arrayNumber.Length + @" successfully loaded");
        }
        await Task.Delay(10);
    }
}

iPad not showing canvas lines properly

I have created a small script for vision testing. I posted part of it at https://jsfiddle.net/jaka_87/bpadyanh/
The theory behind it: the canvas is filled with a pattern. In each pattern there are 3 lines drawn: two black ones with a white one in the middle. The white line has twice the width of one black line. When the width of these lines is so small that our eye can't distinguish between the three, we see only a gray background (even though the lines are still there).
I tested it on a few computers. On most of them it works well (though I saw some strange patterns on some of the older ones: one column of vertical white lines, then 2-3 dark columns, then a white one...). I was assuming that it has to do with the display/graphics card or something similar. I also tested it on some mobile devices. It works fine on my Nexus 7 and Moto G, but not on my Transformer Prime pad (a strange pattern like the one described before, for which again I blame the tablet).
It looks absolutely horrible (by far the worst of all tested) on my iPad and my friend's iPhone. I was expecting the best results there, since they are known for very good screens, but the results are horrible. When the lines are wide enough it's OK, but when they get narrower they are merged together into one either black or white line, not shown separately.
Is there any way to fix that so it would work on iOS?
var povecava2 = "0.04226744186046511";
var izmerjeno = 1;
var razmak = 3.5;
var contrast = 1;
var canvas = document.getElementById('canvas1');
var ctx = canvas.getContext('2d');
// canvas size
ctx.canvas.width = window.innerWidth - 23;
ctx.canvas.height = window.innerHeight - 70;
var sirinaopto = Math.round(200 * izmerjeno * povecava2 * razmak);
if (sirinaopto % 2 === 0) { sirinaopto = sirinaopto + 1; } // if the optotype width is even, make it odd
var enota4 = ((0.19892970392 * 130 * izmerjeno * povecava2) / 4).toFixed(2); // 1 arc minute
var center = Math.round((ctx.canvas.width - sirinaopto) / 2);
// how many times it fits vertically
var kolkratgre = Math.floor(ctx.canvas.height / sirinaopto);
var visina2 = sirinaopto * kolkratgre;
// how many times it fits horizontally
var kolkratgrehor = Math.ceil(ctx.canvas.width / sirinaopto);
if (kolkratgrehor % 2 === 0) { kolkratgrehor = kolkratgrehor - 1; }
var zacetek = (ctx.canvas.width - (kolkratgrehor * sirinaopto)) / 2;
ctx.rect(0, 0, ctx.canvas.width, ctx.canvas.height);
ctx.fillStyle = "rgb(140,140,140)";
ctx.fill();
// 90 degrees
var canvasPattern0 = document.createElement("canvas");
canvasPattern0.width = sirinaopto;
canvasPattern0.height = sirinaopto;
var contextPattern0 = canvasPattern0.getContext("2d");
contextPattern0.mozImageSmoothingEnabled = false;
contextPattern0.imageSmoothingEnabled = false;
contextPattern0.translate((canvasPattern0.width / 2) - (enota4 * 2), (canvasPattern0.width / 2) - (10 * enota4));
contextPattern0.beginPath();
contextPattern0.globalAlpha = contrast;
contextPattern0.moveTo(enota4 / 2, 0);
contextPattern0.lineTo(enota4 / 2, 20 * enota4);
contextPattern0.lineWidth = enota4;
contextPattern0.strokeStyle = 'black';
contextPattern0.stroke();
contextPattern0.closePath();
contextPattern0.beginPath();
contextPattern0.globalAlpha = contrast;
contextPattern0.moveTo(enota4 * 2, 0);
contextPattern0.lineTo(enota4 * 2, 20 * enota4);
contextPattern0.lineWidth = enota4 * 2;
contextPattern0.strokeStyle = 'white';
contextPattern0.stroke();
contextPattern0.closePath();
contextPattern0.beginPath();
contextPattern0.globalAlpha = contrast;
contextPattern0.moveTo(enota4 * 3.5, 0);
contextPattern0.lineTo(enota4 * 3.5, 20 * enota4);
contextPattern0.lineWidth = enota4;
contextPattern0.strokeStyle = 'black';
contextPattern0.stroke();
contextPattern0.closePath();
// 0 degrees
var canvasPattern1 = document.createElement("canvas");
canvasPattern1.width = sirinaopto;
canvasPattern1.height = sirinaopto;
var contextPattern1 = canvasPattern1.getContext("2d");
contextPattern1.translate(sirinaopto / 2, sirinaopto / 2);
contextPattern1.rotate(90 * Math.PI / 180);
contextPattern1.drawImage(canvasPattern0, sirinaopto * (-0.5), sirinaopto * (-0.5));
contextPattern1.save();
var imagesLoaded = [];
imagesLoaded.push(canvasPattern0);
imagesLoaded.push(canvasPattern1);
var randomPattern = function(imgWidth, imgHeight, areaWidth, areaHeight) {
    // either set a defined width/height for our images, or use the first one's
    imgWidth = sirinaopto;
    imgHeight = sirinaopto;
    // restrict the randomness size by using an areaWidth/Height
    areaWidth = ctx.canvas.width;
    areaHeight = visina2;
    // create a buffer canvas
    var patternCanvas = canvas.cloneNode(true);
    var patternCtx = patternCanvas.getContext('2d');
    patternCanvas.width = areaWidth;
    patternCanvas.height = areaHeight;
    var xloops = Math.ceil(areaWidth / imgWidth);
    if (xloops % 2 === 0) { xloops = xloops - 1; }
    var yloops = Math.ceil(areaHeight / imgHeight);
    //alert(xloops);
    for (var xpos = 0; xpos < xloops; xpos++) {
        for (var ypos = 0; ypos < yloops; ypos++) {
            var img = imagesLoaded[Math.floor(Math.random() * imagesLoaded.length)];
            patternCtx.drawImage(img, (xpos * imgWidth) + zacetek, (ypos * imgHeight), imgWidth, imgHeight);
        }
    }
    // create a pattern from this randomly created image
    return patternCtx.createPattern(patternCanvas, 'repeat');
}
var draw = function() {
    // create the random pattern (should be moved out of the draw)
    var patt = randomPattern(sirinaopto, sirinaopto);
    ctx.fillStyle = patt;
    ctx.fillRect(0, 0, ctx.canvas.width, visina2);
};
draw();
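One note worth adding (my observation, not part of the original post): on high-DPI displays like the iPad's Retina screen, the canvas backing store is upscaled by the browser, so hairline-width strokes get resampled and can merge into a single black or white band. A common mitigation, sketched here under the assumption that the rest of the code keeps working in CSS pixels, is to size the backing store by window.devicePixelRatio:
// Sketch: give the canvas a 1:1 mapping to physical pixels.
var dpr = window.devicePixelRatio || 1;
var cssWidth = window.innerWidth - 23;
var cssHeight = window.innerHeight - 70;
canvas.width = cssWidth * dpr;          // backing store in device pixels
canvas.height = cssHeight * dpr;
canvas.style.width = cssWidth + 'px';   // displayed size stays in CSS pixels
canvas.style.height = cssHeight + 'px';
ctx.scale(dpr, dpr);                    // existing drawing coordinates keep working
Whether the stroked lines then stay above one physical pixel still depends on the computed widths, but at least the browser no longer resamples the finished pattern.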

Printing in OpenLayers 3 (PDF)

I have made a printing tool for OpenLayers 3 which prints in PDF format. Here is my code to print to PDF:
var dims = {
    a0: [1189, 841],
    a1: [841, 594],
    a2: [594, 420],
    a3: [420, 297],
    a4: [297, 210],
    a5: [210, 148]
};
var exportElement = document.getElementById('export-pdf');
exportElement.addEventListener('click', function(e) {
    if (exportElement.className.indexOf('disabled') > -1) {
        return;
    }
    exportElement.className += ' disabled';
    var format = document.getElementById('format').value;
    var resolution = document.getElementById('resolution').value;
    var buttonLabelElement = document.getElementById('button-label');
    var label = buttonLabelElement.innerText;
    var dim = dims[format];
    var width = Math.round(dim[0] * resolution / 25.4);
    var height = Math.round(dim[1] * resolution / 25.4);
    var size = /** @type {ol.Size} */ (map.getSize());
    var extent = map.getView().calculateExtent(size);
    map.once('postcompose', function(event) {
        //var tileQueue = map.getTileQueue();
        // To prevent potential unexpected division-by-zero
        // behaviour, tileTotalCount must be larger than 0.
        //var tileTotalCount = tileQueue.getCount() || 1;
        var interval;
        interval = setInterval(function() {
            //var tileCount = tileQueue.getCount();
            //var ratio = 1 - tileCount / tileTotalCount;
            //buttonLabelElement.innerText = ' ' + (100 * ratio).toFixed(1) + '%';
            //if (ratio == 1 && !tileQueue.getTilesLoading()) {
            clearInterval(interval);
            buttonLabelElement.innerText = label;
            var canvas = event.context.canvas;
            var data = canvas.toDataURL('image/jpeg');
            var pdf = new jsPDF('landscape', undefined, format);
            pdf.addImage(data, 'JPEG', 0, 0, dim[0], dim[1]);
            pdf.save('map.pdf');
            map.setSize(size);
            map.getView().fitExtent(extent, size);
            map.renderSync();
            exportElement.className =
                exportElement.className.replace(' disabled', '');
            // }
        }, 100);
    });
    map.setSize([width, height]);
    map.getView().fitExtent(extent, /** @type {ol.Size} */ (map.getSize()));
    map.renderSync();
}, false);
I can print to PDF when I have only the OSM layer, but when I add local layers from my GeoServer I can't print anything and the whole application freezes.
Can anyone tell me what I am doing wrong here?
I am using jsPDF to generate the PDF.
AJ
Your problem is that you load imagery from other domains and haven't configured them for CORS. See https://developer.mozilla.org/en-US/docs/Web/HTML/CORS_enabled_image for a description of cross-origin image use.
In order to get data out of the canvas, all images drawn into it must either come from the same domain or be transmitted with the appropriate Access-Control-Allow-Origin header.
I would investigate how to set up your server to serve the map imagery with those headers. You should also take a look at the crossOrigin option on your ol3 sources.
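For illustration, a minimal sketch of that option on a tile source (the URL and layer name are placeholders for your GeoServer setup):
// Sketch: opt the WMS tiles into CORS so the canvas is not tainted.
// The server must also send the Access-Control-Allow-Origin header.
var wmsLayer = new ol.layer.Tile({
    source: new ol.source.TileWMS({
        url: 'http://localhost:8080/geoserver/wms',  // placeholder URL
        params: {'LAYERS': 'myworkspace:mylayer'},   // placeholder layer
        crossOrigin: 'anonymous'
    })
});
map.addLayer(wmsLayer);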
There are a few solutions for CORS.
A very simple one is to proxy the OSM requests through your backend server (user <-> backend <-> OSM), but then you have a little more server load.
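As an illustration of that proxy idea (entirely hypothetical routes and names, Node/Express chosen just as an example backend):
// Hypothetical same-origin tile proxy: the browser asks our own server
// for /tiles/z/x/y.png, so the canvas is never tainted by a foreign origin.
const express = require('express');
const https = require('https');
const app = express();

app.get('/tiles/:z/:x/:y.png', function (req, res) {
    const url = 'https://tile.openstreetmap.org/' +
        req.params.z + '/' + req.params.x + '/' + req.params.y + '.png';
    https.get(url, { headers: { 'User-Agent': 'my-map-app' } }, function (tile) {
        res.setHeader('Content-Type', 'image/png');
        tile.pipe(res); // stream the tile back from the same origin
    });
});

app.listen(3000);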

How to clone an Object3D in Three.js?

I want to clone a model loaded with a loader. I found this issue on GitHub, but the solution doesn't work; it seems there has been a structural change in Object3D.
How can I clone an Object3D in the current stable version of Three.js?
In the newer versions of three.js you have a clone method.
For example, I use a queen from chess and I had to duplicate it multiple times:
// queen is a mesh
var newQueen = queen.clone();
// make sure to reposition it, to be able to see the new queen!
newQueen.position.set(100, 100, 100); // or any other coordinates
It will work with any mesh.
I used three.js r61.
Actually mrdoob's answer is your answer: the loader outputs a geometry that you use to create a mesh.
You need to create a new mesh with the loader-created mesh's geometry and material:
for (var i = 0; i < 10; i++) {
    var mesh = new THREE.Mesh(loadedMesh.geometry, loadedMesh.material);
    mesh.position.set(i * 100, 0, 0);
    scene.add(mesh);
}
You want to clone a Mesh and not an Object3D because the output of the loader is a Mesh.
I found one fast solution (not the most efficient).
The GLTFLoader uses THREE.FileLoader internally, which allows you to cache files.
To do so, you need to add this line before you create an instance of the GLTFLoader:
THREE.Cache.enabled = true;
Then you can load the same model multiple times; only the first load will take longer. For example:
THREE.Cache.enabled = true;
var loader = new GLTFLoader();
var deeplyClonedModels = [];
for (var i = 0; i < 10; i++) {
    loader.load('foo.gltf', function (gltf) {
        deeplyClonedModels.push(gltf.scene);
    });
}

How can I detect the number of lit faces on a mesh?

I have a number of objects arranged in a THREE.Scene, and I want to calculate or retrieve a relative value indicating how much light each object is receiving from a single PointLight source. Simplified example:
With the light positioned at the camera, Block 1's value might be 0.50, since 3 of its 6 faces are completely exposed, while 2 is ~0.33 and 3 is ~0.67.
I could probably do this the hard way by drawing a ray from the light toward the center of each face and looking at the intersections, but I'm assuming it's possible to directly retrieve the light level of each face.
This code takes the object's global matrix into consideration.
var amount = 0;
var rotationMatrix = new THREE.Matrix4();
var vector = new THREE.Vector3();
var centroid = new THREE.Vector3();
var normal = new THREE.Vector3();
for (var i = 0; i < objects.length; i++) {
    var object = objects[i];
    rotationMatrix.extractRotation(object.matrixWorld);
    for (var j = 0; j < object.geometry.faces.length; j++) {
        var face = object.geometry.faces[j];
        centroid.copy(face.centroid);
        object.matrixWorld.multiplyVector3(centroid);
        normal.copy(face.normal);
        rotationMatrix.multiplyVector3(normal);
        vector.sub(light.position, centroid).normalize();
        if (normal.dot(vector) > 0) amount++;
    }
}
I think something like this should do the trick:
var amount = 0;
var faces = mesh.geometry.faces;
for (var i = 0; i < faces.length; i++) {
    if (faces[i].normal.dot(light.position) > 0) amount++;
}
(Warning: brute-force method!)
I'm including this for reference, since it's what I'm currently using to meet all of the requirements described in the question. This function considers a face unlit if its center is not directly visible from the light's position.
I have no rotation matrix to consider in my application.
function getLightLevel(obj) {
    /* Return the fraction of obj.geometry faces exposed to the light */
    var litCount = 0;
    var faces = obj.geometry.faces;
    var faceCount = faces.length;
    var direction = new THREE.Vector3();
    var centroid = new THREE.Vector3();
    for (var i = 0; i < faceCount; i++) {
        // Test only light-facing faces (from mrdoob's first answer).
        if (faces[i].normal.dot(light.position) > 0) {
            centroid.add(obj.position, faces[i].centroid);
            direction.sub(centroid, light.position).normalize();
            // Exclude the face if its centroid is obscured by another object.
            var ray = new THREE.Ray(light.position, direction);
            var intersects = ray.intersectObjects(objects);
            if (intersects.length > 0 && intersects[0].face === faces[i]) {
                litCount++;
            }
        }
    }
    return litCount / faceCount;
}
