This question is an extension of Multiple GLTF loading and Merging on server side.
I am trying to merge multiple glTF files that share some common nodes.
The answer there helped me merge the files; I combined the scenes with the following code, and it rendered perfectly:
const scenes = root.listScenes();
const scene0 = scenes[0];
root.setDefaultScene(scene0);
if (scenes.length > 1) {
  for (let i = 1; i < scenes.length; i++) {
    let scene = scenes[i];
    let nodes = scene.listChildren();
    for (let j = 0; j < nodes.length; j++) {
      scene0.addChild(nodes[j]);
    }
  }
}
root.listScenes().forEach((b, index) => index > 0 ? b.dispose() : null);
My issue is that all the data that was common across the glTF files is duplicated, and this will cause problems in animation when the root bones need to change. Is there a way to merge so that the common nodes are not duplicated? I am also attempting a custom merge:
const gltfLoader = () => {
  const document = new Document();
  const root = document.getRoot();
  document.merge(io.read(filePaths[0]));
  let model;
  for (let i = 1; i < filePaths.length; i++) {
    const inDoc = new Document();
    inDoc.merge(io.read(filePaths[i]));
    model = inDoc.getRoot().listScenes()[0];
    model.listChildren().forEach((child) => {
      mergeStructure(root.listScenes()[0], child);
    });
  }
  io.write('output.gltf', document);
};
const mergeStructure = (parent, childToMerge) => {
  let contains = false;
  parent.listChildren().forEach((child) => {
    if (child.t === childToMerge.t && !contains && child.getName() === childToMerge.getName()) {
      childToMerge.listChildren().forEach((subChild) => {
        mergeStructure(child, subChild);
      });
      contains = true;
    }
  });
  if (!contains) {
    console.log("Adding " + childToMerge.getName() + " to " + parent.getName() + " as child");
    parent.addChild(childToMerge);
  }
};
But this merge is not working; it fails with Error: Cannot link disconnected graphs/documents.
I am a newbie to 3D modelling. Some direction would be great.
Thanks!
The error you're seeing occurs because the code attempts to move individual resources (e.g. Nodes) from one glTF Document to another. This isn't possible; each Document manages its resource graph internally. An equivalent workflow would be:
Load N files and merge into one document (with N scenes).
import { Document, NodeIO } from '@gltf-transform/core';
const io = new NodeIO();
const document = new Document();
for (const path of paths) {
  document.merge(io.read(path));
}
Iterate over all of the scenes, moving their children to some common scene:
const root = document.getRoot();
const mainScene = root.listScenes()[0];
for (const scene of root.listScenes()) {
  if (scene === mainScene) continue;
  for (const child of scene.listChildren()) {
    // If conditions are met, append the child to `mainScene`.
    // Doing so will automatically detach it from the
    // previous scene.
  }
  scene.dispose();
}
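For example, assuming node names uniquely identify the shared skeleton nodes (an assumption about these particular assets; substitute whatever condition identifies your common nodes), the placeholder above could be filled in like this:
const seen = new Map();
for (const child of mainScene.listChildren()) {
  seen.set(child.getName(), child);
}
for (const scene of root.listScenes()) {
  if (scene === mainScene) continue;
  for (const child of scene.listChildren()) {
    if (seen.has(child.getName())) {
      // A node with this name is already in mainScene; skip the duplicate
      // so shared root bones are not added twice.
      continue;
    }
    mainScene.addChild(child); // Automatically detaches from the old scene.
    seen.set(child.getName(), child);
  }
  scene.dispose();
}
The skipped duplicates become unreferenced once their scene is disposed, and the prune() step below removes them from the document.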
Clean up any remaining unmerged resources.
import { prune } from '@gltf-transform/functions';
await document.transform(prune());
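Separately, if the duplicated data is byte-identical across the source files (meshes, materials, textures, accessors), the dedup() transform from the same package can merge those duplicates as well — a sketch, assuming your gltf-transform version exposes it:
import { dedup, prune } from '@gltf-transform/functions';
await document.transform(dedup(), prune());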
In OpenLayers 2.8 there was a threshold associated with the cluster strategy, as per ticket https://trac.osgeo.org/openlayers/ticket/1815.
In OpenLayers 3 there is no mention of it anywhere (and the strategy paradigm seems to be gone as well): http://openlayers.org/en/master/apidoc/ol.source.Cluster.html
Does anyone know if there exists a ticket for this feature?
The paradigm has changed considerably. In OpenLayers 3 you create a new layer with a cluster style, and the "threshold" is set as a maxResolution or minResolution in the layer's options.
Similar to:
var clusterLayer = new ol.layer.Vector({
  visible: true,
  zIndex: insightMap.totalServcies - element.SortOrder,
  id: Id,
  serviceId: element.Id,
  minResolution: clusteringThreshold,
  cluster: true,
});
You can also use minZoom and maxZoom according to the documentation, but I've encountered issues with them performing consistently.
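For context, a plain OL3 cluster setup without any wrapper looks roughly like this (a minimal sketch; vectorSource, the distance, and clusteringThreshold are placeholder values):
var clusterSource = new ol.source.Cluster({
  distance: 40,         // pixel distance within which features are merged
  source: vectorSource  // the underlying ol.source.Vector
});
var clusterLayer = new ol.layer.Vector({
  source: clusterSource,
  // The layer is only drawn at resolutions coarser than this value,
  // which is what stands in for the old OL2 cluster threshold.
  minResolution: clusteringThreshold,
  style: clusterStyleFunction
});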
Update
It's actually possible to have a proper cluster threshold without recompiling the library. You need to use the geometry of each feature (from the features property of the cluster) in the style function.
const noClusterStyles = [];
vectorLayer.setStyle(feature => {
  const features = feature.get('features');
  if (features.length > 5) {
    return clusterStyle;
  } else {
    for (let i = 0, ii = features.length; i < ii; ++i) {
      const clone = noClusterStyles[i] ? noClusterStyles[i] : noClusterStyle.clone();
      clone.setGeometry(features[i].getGeometry());
      noClusterStyles[i] = clone;
    }
    noClusterStyles.length = features.length;
    return noClusterStyles;
  }
});
Thank you to @ahocevar for the code snippet.
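Note the design choice here: setting each clone's geometry to an individual feature's geometry renders the features at their true positions instead of the cluster centroid, while caching the clones in noClusterStyles (and truncating the array to features.length) avoids allocating new style objects on every render.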
Another solution would require modifying the OL3 library itself.
The ol.source.Cluster.prototype.cluster_ function has the following code snippet:
var neighbors = this.source_.getFeaturesInExtent(extent);
ol.DEBUG && console.assert(neighbors.length >= 1, 'at least one neighbor found');
neighbors = neighbors.filter(function(neighbor) {
  var uid = ol.getUid(neighbor).toString();
  if (!(uid in clustered)) {
    clustered[uid] = true;
    return true;
  } else {
    return false;
  }
});
// Add the following:
// If an element has more than THRESHOLD neighbors, cluster them;
// otherwise register each feature as a cluster of one.
// Size-based styling should be handled separately, in the layer style function.
let THRESHOLD = 3;
if (neighbors.length > THRESHOLD) {
  this.features_.push(this.createCluster_(neighbors));
} else {
  for (var j = 0; j < neighbors.length; j++) {
    this.features_.push(this.createCluster_([neighbors[j]]));
  }
}
I figured if the devtools can list all created IndexedDB databases, then there should be an API to retrieve them...?
Does anyone know how I can get a list of names with the help of the Firefox SDK?
I did dig into the code and looked at the source. Unfortunately, there wasn't any convenient API that would pull out all the databases from one host.
The way they did it was to look around in the user's profile folder for .sqlite files, then run a SQL query against each one (multiple times, in case there is an ongoing transaction) asking for the database name.
It came down to this piece of code:
// stripped-down version of: https://dxr.mozilla.org/mozilla-central/source/devtools/server/actors/storage.js
/* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
"use strict";
const {async} = require("resource://gre/modules/devtools/async-utils");
const { setTimeout } = require("sdk/timers");
const promise = require("sdk/core/promise");
// A RegExp for characters that cannot appear in a file/directory name. This is
// used to sanitize the host name for indexed db, to look up whether the file is
// present in the <profileDir>/storage/default/ location.
const illegalFileNameCharacters = [
  "[",
  // Control characters \001 to \036
  "\\x00-\\x24",
  // Special characters
  "/:*?\\\"<>|\\\\",
  "]"
].join("");
const ILLEGAL_CHAR_REGEX = new RegExp(illegalFileNameCharacters, "g");
var OS = require("resource://gre/modules/osfile.jsm").OS;
var Sqlite = require("resource://gre/modules/Sqlite.jsm");
/**
 * An async method equivalent to setTimeout, but using Promises.
 *
 * @param {number} time
 *        The wait time in milliseconds.
 */
function sleep(time) {
  let deferred = promise.defer();
  setTimeout(() => {
    deferred.resolve(null);
  }, time);
  return deferred.promise;
}
var indexedDBHelpers = {
  /**
   * Fetches all the databases and their metadata for the given `host`.
   */
  getDBNamesForHost: async(function*(host) {
    let sanitizedHost = indexedDBHelpers.getSanitizedHost(host);
    let directory = OS.Path.join(OS.Constants.Path.profileDir, "storage",
                                 "default", sanitizedHost, "idb");
    let exists = yield OS.File.exists(directory);
    if (!exists && host.startsWith("about:")) {
      // Try the moz-safe-about directory.
      sanitizedHost = indexedDBHelpers.getSanitizedHost("moz-safe-" + host);
      directory = OS.Path.join(OS.Constants.Path.profileDir, "storage",
                               "permanent", sanitizedHost, "idb");
      exists = yield OS.File.exists(directory);
    }
    if (!exists) {
      return [];
    }
    let names = [];
    let dirIterator = new OS.File.DirectoryIterator(directory);
    try {
      yield dirIterator.forEach(file => {
        // Skip directories.
        if (file.isDir) {
          return null;
        }
        // Skip any non-sqlite files.
        if (!file.name.endsWith(".sqlite")) {
          return null;
        }
        return indexedDBHelpers.getNameFromDatabaseFile(file.path).then(name => {
          if (name) {
            names.push(name);
          }
          return null;
        });
      });
    } finally {
      dirIterator.close();
    }
    return names;
  }),
  /**
   * Removes any illegal characters from the host name to make it a valid file
   * name.
   */
  getSanitizedHost: function(host) {
    return host.replace(ILLEGAL_CHAR_REGEX, "+");
  },
  /**
   * Retrieves the proper indexed db database name from the provided .sqlite
   * file location.
   */
  getNameFromDatabaseFile: async(function*(path) {
    let connection = null;
    let retryCount = 0;
    // Content pages might hold an open transaction for the same indexed db
    // that this sqlite file belongs to. In that case, Sqlite.openConnection
    // will throw. Thus we retry for some time to see if the lock is removed.
    while (!connection && retryCount++ < 25) {
      try {
        connection = yield Sqlite.openConnection({ path: path });
      } catch (ex) {
        // Continuously retrying is overkill. Wait for 100ms before the next try.
        yield sleep(100);
      }
    }
    if (!connection) {
      return null;
    }
    let rows = yield connection.execute("SELECT name FROM database");
    if (rows.length != 1) {
      return null;
    }
    let name = rows[0].getResultByName("name");
    yield connection.close();
    return name;
  })
};
module.exports = indexedDBHelpers.getDBNamesForHost;
If anyone wants to use this, here is how:
var getDBNamesForHost = require("./getDBNamesForHost");
getDBNamesForHost("http://example.com").then(names => {
  console.log(names);
});
I think it would be cool if someone were to build an add-on that adds indexedDB.mozGetDatabaseNames, working the same way as indexedDB.webkitGetDatabaseNames. I'm not doing that... I'll leave it up to you if you want to. It would be a great dev tool to have ;)
I need some help with my CasperJS script.
I don't know how to get all my pictures as an array from the current folder, or how to loop over them to insert each one into the correct input.
It's difficult to explain, so here is my starting code. I've put comments where I have problems.
var casper = require('casper').create();

casper.start('http://imgchili.net/', function() {
  // Get a list of all the pictures from the current folder.
  // For each picture, click this button.
  this.mouseEvent('click', 'input.button1:nth-child(6)');
  this.fillSelectors('form#upload_form', {
    // Another loop here.
    '.grey > input:nth-child(2)': /* First picture */,
    '.grey > input:nth-child(4)': /* Second picture */,
    '.grey > input:nth-child(6)': /* Third picture */
  }, true);
  casper.capture('captureTest.png');
});
// 8s can be too low if I have a lot of pictures!
casper.wait(8000, function() {
  casper.capture('captureResult.png');
});
casper.then(function() {
  this.echo(this.fetchText('textarea.input_field:nth-child(11)'));
});
casper.run();
EDIT:
Thanks, that helped me a lot. But I'm having trouble looping over the inputs.
var fs = require('fs'),
    casper = require('casper').create(),
    myImages = fs.list(fs.workingDirectory + '/img');

casper.start('http://imgchili.net/', function() {
  // For each image, click to add a new upload input.
  // Begin with 2 because 0 = "." and 1 = "..".
  for (var i = 2; i < myImages.length; i++) {
    this.mouseEvent('click', 'input.button1:nth-child(6)');
  }
  // It doesn't work and shows no error...
  for (var i = 2; i < myImages.length; i++) {
    j = i * 2;
    input = '.grey > input:nth-child(' + j + ')';
    this.fillSelectors('form#upload_form', {
      input: '/img/' + myImages[i],
    }, false);
    // Even this part doesn't work.
    console.log('i = ' + i + ' & imgName = ' + myImages[i]);
  }
});
casper.then(function() {
  casper.capture('result.png');
});
casper.run();
You can use the PhantomJS file system to get the files in your current folder. Here is the API documentation.
The following gets the current folder's files and then filters all .png's into a new array. You can then use a loop to click the button for each image. You will have to alter / add your own code, because this does not accomplish the upload task, but it should help you along.
// vars
var casper = require('casper').create();
var fs = require('fs'); // reference to the PhantomJS file system
var myFolder = fs.list(fs.workingDirectory); // all files in the current folder
var myImages = []; // only the images from the current folder
var url = 'http://imgchili.net/'; // starting url
var fileType = '.png'; // image file type to filter by

casper.start(url);

casper.then(function() {
  // create an array of just the images
  for (var i = 0; i < myFolder.length; i++) {
    if (myFolder[i].indexOf(fileType) != -1) {
      myImages.push(myFolder[i]);
    }
  }
  // click for each image
  for (i = 0; i < myImages.length; i++) {
    this.mouseEvent('click', 'input.button1:nth-child(6)');
  }
});
// wait for the images to be uploaded;
// pass a timeout, or it defaults to Casper's step timeout
casper.waitForSelector('#page_body', function() {
  casper.capture('captureResult.png');
}, null, 15000);
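Regarding the fill step from your edit: an object literal key written as input is the literal string "input", not the variable's value, so the selector map has to be built dynamically before calling fillSelectors. A hedged sketch of that remaining step (the nth-child indexing and the form layout are assumptions about imgchili's markup):
casper.then(function() {
  var selectors = {};
  for (var i = 0; i < myImages.length; i++) {
    // Assumed: file inputs sit at even nth-child positions (2, 4, 6, ...).
    selectors['.grey > input:nth-child(' + (2 * (i + 1)) + ')'] =
      fs.workingDirectory + '/' + myImages[i];
  }
  this.fillSelectors('form#upload_form', selectors, false);
});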
From the API page, I gather there's no function for what I'm trying to do. I want to read text from a file, storing it as a list of strings, manipulate the text, and save the file. The first part is easy using the function:
abstract List<String> readAsLinesSync([Encoding encoding = Encoding.UTF_8])
However, there is no function that lets me write the contents of the list directly to the file, e.g.
abstract void writeAsLinesSync(List<String> contents, [Encoding encoding = Encoding.UTF_8, FileMode mode = FileMode.WRITE])
Instead, I've been using:
abstract void writeAsStringSync(String contents, [Encoding encoding = Encoding.UTF_8, FileMode mode = FileMode.WRITE])
by reducing the list to a single string. I'm sure I could also use a for loop and feed a stream line by line. I was wondering two things:
Is there a way to just hand the file a list of strings for writing?
Why is there a readAsLinesSync but no writeAsLinesSync? Is this an oversight or a design decision?
Thanks
I just made my own export class that handles writing data to a file or sending it to a websocket.
Usage:
exportToWeb(mapOrList, 'local', 8080);
exportToFile(mapOrList, 'local/data/data.txt');
Class:
// Save data to a file.
void exportToFile(var data, String filename) =>
    new _Export(data).toFile(filename);

// Send data to a websocket.
void exportToWeb(var data, String host, int port) =>
    new _Export(data).toWeb(host, port);

class _Export {
  HashMap mapData;
  List listData;
  bool isMap = false;
  bool isComplex = false;

  _Export(var data) {
    // Check if the input is a List or a Map data structure.
    if (data.runtimeType == HashMap) {
      isMap = true;
      mapData = data;
    } else if (data.runtimeType == List) {
      listData = data;
      if (data.every((element) => element is Complex)) {
        isComplex = true;
      }
    } else {
      throw new ArgumentError("input data is not valid.");
    }
  }

  // Save to a file using an IOSink. Handles Map, List and List<Complex>.
  void toFile(String filename) {
    List<String> tokens = filename.split(new RegExp(r'\.(?=[^.]+$)'));
    if (tokens.length == 1) tokens.add('txt');
    if (isMap) {
      mapData.forEach((k, v) {
        File fileHandle = new File('${tokens[0]}_k$k.${tokens[1]}');
        IOSink dataFile = fileHandle.openWrite();
        for (var i = 0; i < mapData[k].length; i++) {
          dataFile.write('${mapData[k][i].real}\t'
              '${mapData[k][i].imag}\n');
        }
        dataFile.close();
      });
    } else {
      File fileHandle = new File('${tokens[0]}_data.${tokens[1]}');
      IOSink dataFile = fileHandle.openWrite();
      if (isComplex) {
        for (var i = 0; i < listData.length; i++) {
          listData[i] = listData[i].cround2;
          dataFile.write("${listData[i].real}\t${listData[i].imag}\n");
        }
      } else {
        for (var i = 0; i < listData.length; i++) {
          dataFile.write('${listData[i]}\n');
        }
      }
      dataFile.close();
    }
  }

  // Set up a websocket to send data to a client.
  void toWeb(String host, int port) {
    // Connect with ws://localhost:8080/ws
    // For echo - http://www.websocket.org/echo.html
    if (host == 'local') host = '127.0.0.1';
    HttpServer.bind(host, port).then((server) {
      server.transform(new WebSocketTransformer()).listen((WebSocket webSocket) {
        webSocket.listen((message) {
          var msg = json.parse(message);
          print("Received the following message: \n"
              "${msg["request"]}\n${msg["date"]}");
          if (isMap) {
            webSocket.send(json.stringify(mapData));
          } else {
            if (isComplex) {
              List real = new List(listData.length);
              List imag = new List(listData.length);
              for (var i = 0; i < listData.length; i++) {
                listData[i] = listData[i].cround2;
                real[i] = listData[i].real;
                imag[i] = listData[i].imag;
              }
              webSocket.send(json.stringify({"real": real, "imag": imag}));
            } else {
              webSocket.send(json.stringify({"real": listData, "imag": null}));
            }
          }
        },
        onDone: () {
          print('Connection closed by client: Status - ${webSocket.closeCode}'
              ' : Reason - ${webSocket.closeReason}');
          server.close();
        });
      });
    });
  }
}
I asked Mads Ager about this; he works on the io module. He said that he decided not to add writeAsLines because he didn't find it useful: for one, it is trivial to write the for loop, and for another, you would have to parameterize it with the kind of line separator you want to use. He said he can add it if there is a strong feeling that it would be valuable, but he didn't immediately see a lot of value in it.
I've got the following working A* code in C#:
static bool AStar(
    IGraphNode start,
    Func<IGraphNode, bool> check,
    out List<IGraphNode> path)
{
    // Closed list. HashSet because of O(1) lookups.
    var closed = new HashSet<IGraphNode>();
    // Binary heap which accepts multiple equivalent items.
    var frontier = new MultiHeap<IGraphNode>(
        (a, b) => Math.Sign(a.TotalDistance - b.TotalDistance));
    // Some way to know how many equivalent copies of an item there are.
    var references = new Dictionary<IGraphNode, int>();
    // Some way to know which parent a graph node has.
    var parents = new Dictionary<IGraphNode, IGraphNode>();

    // One new graph node in the frontier,
    frontier.Insert(start);
    // Count the reference.
    references[start] = 1;

    IGraphNode current = start;
    do
    {
        do
        {
            frontier.Get(out current);
            // If it's in the closed list, or there are other
            // instances of it in the frontier, and there are
            // still nodes left in the frontier, then that's
            // not the best node.
        } while (
            (closed.Contains(current) ||
             (--references[current]) > 0) &&
            frontier.Count > 0);

        // If we have run out of options,
        if (closed.Contains(current) && frontier.Count == 0)
        {
            // then there's no path.
            path = null;
            return false;
        }

        closed.Add(current);

        foreach (var edge in current.Edges)
        {
            // If there's a chance of a better path to this node,
            if (!closed.Contains(edge.End))
            {
                int count;
                // If the frontier doesn't contain this node,
                if (!references.TryGetValue(edge.End, out count) ||
                    count == 0)
                {
                    // Initialize it and insert it.
                    edge.End.PathDistance =
                        current.PathDistance + edge.Distance;
                    edge.End.EstimatedDistance = CalcDistance(edge.End);
                    parents[edge.End] = current;
                    frontier.Insert(edge.End);
                    references[edge.End] = 1;
                }
                else
                {
                    // If this path is better than the existing path,
                    if (current.PathDistance + edge.Distance <
                        edge.End.PathDistance)
                    {
                        // Use this path.
                        edge.End.PathDistance =
                            current.PathDistance + edge.Distance;
                        parents[edge.End] = current;
                        frontier.Insert(edge.End);
                        // Keep track of multiple equivalent items.
                        ++references[edge.End];
                    }
                }
            }
        }
    } while (!check(current) && frontier.Count > 0);

    if (check(current))
    {
        path = new List<IGraphNode>();
        path.Add(current);
        while (current.PathDistance != 0)
        {
            current = parents[current];
            path.Add(current);
        }
        path.Reverse();
        return true;
    }

    // Yep, no path.
    path = null;
    return false;
}
How do I make it faster? No code samples, please; that's a challenge I've set myself.
Edit: To clarify, I'm looking for any advice, suggestions, links, etc. that apply to A* in general. The code is just an example. I asked for no code samples because they make it too easy to implement the technique(s) being described.
Thanks.
Have you looked at this page or this page yet? They have plenty of helpful optimization tips as well as some great information on A* in general.
Change to using a Random Meldable Queue for the heap structure. Since you wanted a programming challenge, I won't show you how I changed the recursive Meld method to be non-recursive; that's the trick to getting speed out of that structure. More info in Gambin's paper "Randomized Meldable Priority Queues" (search the web for it).