D3.js + WebGL (2D): please suggest how to proceed

Hey guys, this is what I am working on:
http://bioverto.org
If you choose Upload from a database -> mint full network -> choose any graph, you can view a graph.
I am currently using d3.js for the graphs.
It works well when there are fewer than 1000 nodes; as the size increases, performance drops.
But now I need to render graphs with more than 1000 nodes and edges.
To boost performance I was thinking of using WebGL in 2D.
Can you please suggest how I should proceed?
I am new to WebGL.
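To make it concrete, here is roughly the kind of minimal WebGL 2D setup I have in mind: all node positions packed into one typed array, uploaded to a single buffer, and drawn as GL_POINTS in one call. This is only an illustrative sketch I put together from tutorials, not code from the site, and every name in it is my own.

```typescript
// Sketch: draw N graph nodes as GL_POINTS with raw WebGL (illustrative only).
// In the real app the positions would come from the d3 force layout, not Math.random().
const canvas = document.createElement("canvas");
canvas.width = 800;
canvas.height = 600;
document.body.appendChild(canvas);

const gl = canvas.getContext("webgl");
if (!gl) throw new Error("WebGL not supported");

function compileShader(ctx: WebGLRenderingContext, type: number, src: string): WebGLShader {
  const shader = ctx.createShader(type)!;
  ctx.shaderSource(shader, src);
  ctx.compileShader(shader);
  if (!ctx.getShaderParameter(shader, ctx.COMPILE_STATUS)) {
    throw new Error(ctx.getShaderInfoLog(shader) ?? "shader compile failed");
  }
  return shader;
}

const vs = compileShader(gl, gl.VERTEX_SHADER, `
  attribute vec2 a_position;   // node position in clip space (-1..1)
  void main() {
    gl_Position = vec4(a_position, 0.0, 1.0);
    gl_PointSize = 4.0;        // node size in pixels
  }
`);
const fs = compileShader(gl, gl.FRAGMENT_SHADER, `
  precision mediump float;
  void main() {
    gl_FragColor = vec4(0.2, 0.5, 0.9, 1.0);  // flat node colour
  }
`);

const program = gl.createProgram()!;
gl.attachShader(program, vs);
gl.attachShader(program, fs);
gl.linkProgram(program);
gl.useProgram(program);

// One Float32Array holding x,y for every node -> one buffer, one draw call.
const nodeCount = 5000;
const positions = new Float32Array(nodeCount * 2);
for (let i = 0; i < positions.length; i++) positions[i] = Math.random() * 2 - 1;

const buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, positions, gl.DYNAMIC_DRAW); // re-uploaded on every layout tick

const loc = gl.getAttribLocation(program, "a_position");
gl.enableVertexAttribArray(loc);
gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);

gl.clearColor(1, 1, 1, 1);
gl.clear(gl.COLOR_BUFFER_BIT);
gl.drawArrays(gl.POINTS, 0, nodeCount); // all nodes in a single call
```

I assume edges would go into a second buffer drawn with gl.LINES, and that I would re-upload the positions on each tick of the d3 force simulation.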
Thank you in advance.

Related

Effective macOS/iOS framework for rendering custom maps (SpriteKit or raw Metal)?

Trying to keep this question as specific as I can to avoid being closed as too broad. My end goal is to render marine (as in nautical) maps. See the image below as a reference. I've researched some of the various Apple frameworks to see what suits this best. My data input is effectively an array of arrays where each child array represents a cartographic feature (think an island or boat dock). I started with Core Graphics as it has a very simple API, however its performance is poor (it was taking > 100 ms for a single layer of data when I can expect 10-20 layers on average).
Which brings me to my question: would SpriteKit be an effective framework for handling this workload? My preference is to avoid learning Metal, but if fellow devs recommend this approach I will invest the time. SpriteKit seems able to handle this; I'll probably be working with a few thousand to a few hundred thousand points/vertices at a time. I don't need any complex animations as the map is static in terms of display. Any input appreciated!
GeoJSONMap
Build maps from GeoJSON with MapKit or SpriteKit.
SpriteKit maps can be displayed offline and/or as planes in ARKit.
I loaded a city map resulting in 256 static SpriteKit nodes made from filled GeoJSON polygons; it gives me only 3.7 FPS on an iPhone XS. Perhaps some optimisation is possible, but I did not try.

Frame rate issue with Apple SpriteKit

I'm currently making an app for an assessment which generates a maze using recursive backtracking. It's a 25x25 grid maze, with each wall being a separate SKSpriteNode (I read that using SKShapeNode is not efficient).
However, there are about 1300 nodes in the scene, which is causing some frame rate issues, even on my iPhone X. It's currently idling at about 15-30 fps, which really isn't ideal.
Are there any ideas on how to cache SKSpriteNodes to get better performance? I'm probably overlooking many things and not creating the walls in the most efficient way, but the frame rate seems far too low to be correct.
If anyone could suggest something or nudge me in the right direction, that would be a huge help.
I highly recommend using SKTextures for repeated, identical images. See Creating a Textured Sprite Node.
For optimal performance, create sprites before compile time and put them in a texture atlas in your asset catalog. For creating texture atlases, see the documentation for SKTextureAtlas.

How to align (register) and merge point clouds to get a full 3D model?

I want to get a 3D model of some real-world object.
I have two webcams; using OpenCV and SBM for stereo correspondence I get a point cloud of the scene, and by filtering on z I can get a point cloud of only the object.
I know that ICP is good for this purpose, but it needs the point clouds to be initially well aligned, so it is combined with SAC to achieve better results.
But my SAC fitness score is too big (something like 70 or 40), and ICP doesn't give good results either.
My questions are:
Is it ok for ICP if I just rotate the object in front of the cameras to obtain the point clouds?
What angle of rotation is needed to achieve good results? Or is there a better way of taking pictures of the object for getting a 3D model?
Is it ok if my point clouds have some holes?
What is the maximal acceptable SAC fitness score for a good ICP, and what is the maximal fitness score of a good ICP?
Example of my point cloud files:
https://drive.google.com/file/d/0B1VdSoFbwNShcmo4ZUhPWjZHWG8/view?usp=sharing
My advice, from experience: you already have RGB (or grey) images. ICP is a good tool for optimising the point cloud alignment, but it has some trouble aligning the clouds on its own.
First start with RGB odometry (aligning the point clouds, which are rotated relative to each other, via feature points), then learn how ICP works with the already mentioned Point Cloud Library. Let the RGB features give you a prediction, and then use ICP to optimize that where possible.
When this application works, think about a good fitness score calculation. If that all works, use the trunk version of ICP and tune the parameters. After all this has been done, you have code that is not only fast, but also has a low chance of going wrong.
The following post explains what can go wrong:
Using ICP, we refine this transformation using only geometric information. However, here ICP decreases the precision. What happens is that ICP tries to match as many corresponding points as it can. Here the background behind the screen has more points than the screen itself on the two scans. ICP will then align the clouds to maximize the correspondences on the background. The screen is then misaligned.
https://github.com/introlab/rtabmap/wiki/ICP
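To make the point-to-point idea concrete, here is a toy 2D sketch of a single ICP iteration (real clouds are 3D and you would use PCL's IterativeClosestPoint; every name below is just for illustration). It also shows why the initial guess matters: the nearest-neighbour pairing in the first step is only meaningful when the clouds already start roughly aligned, which is what the RGB/SAC prediction gives you.

```typescript
// Toy 2D sketch of one point-to-point ICP iteration (illustration only).
type Pt = { x: number; y: number };

// Pair each source point with its nearest target point (brute force).
function nearest(p: Pt, cloud: Pt[]): Pt {
  let best = cloud[0];
  let bestD = Infinity;
  for (const q of cloud) {
    const d = (p.x - q.x) ** 2 + (p.y - q.y) ** 2;
    if (d < bestD) { bestD = d; best = q; }
  }
  return best;
}

// One iteration: correspondences -> closed-form 2D rigid transform (Procrustes).
function icpStep(source: Pt[], target: Pt[]): { theta: number; tx: number; ty: number } {
  const pairs = source.map(p => ({ p, q: nearest(p, target) }));

  const mean = (pts: Pt[]): Pt => ({
    x: pts.reduce((s, v) => s + v.x, 0) / pts.length,
    y: pts.reduce((s, v) => s + v.y, 0) / pts.length,
  });
  const mp = mean(pairs.map(c => c.p));
  const mq = mean(pairs.map(c => c.q));

  // Optimal rotation angle from the centred correspondences.
  let sinSum = 0, cosSum = 0;
  for (const { p, q } of pairs) {
    const px = p.x - mp.x, py = p.y - mp.y;
    const qx = q.x - mq.x, qy = q.y - mq.y;
    sinSum += px * qy - py * qx;
    cosSum += px * qx + py * qy;
  }
  const theta = Math.atan2(sinSum, cosSum);

  // Translation that maps the rotated source centroid onto the target centroid.
  const tx = mq.x - (Math.cos(theta) * mp.x - Math.sin(theta) * mp.y);
  const ty = mq.y - (Math.sin(theta) * mp.x + Math.cos(theta) * mp.y);
  return { theta, tx, ty };
}

// Example: the target cloud is the source rotated by ~0.17 rad.
const src: Pt[] = [{ x: 1, y: 0 }, { x: 0, y: 1 }, { x: -1, y: 0 }];
const dst = src.map(p => ({
  x: Math.cos(0.17) * p.x - Math.sin(0.17) * p.y,
  y: Math.sin(0.17) * p.x + Math.cos(0.17) * p.y,
}));
console.log(icpStep(src, dst)); // theta ≈ 0.17, tx ≈ 0, ty ≈ 0
```

If the initial misalignment is large, the nearest-neighbour pairs are mostly wrong and the estimated transform converges to a bad local minimum, which is the behaviour described in the quoted post above.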

WebGL earth: how to make clouds

Problem
I would like to build a realistic view of the earth from low orbit (here ~300 km) with WebGL. That is to say, on the web, with all that implies, and moreover on mobile. Do not stop reading here: to make this a little less difficult, the user can look around but not pan, so the view only concerns a small, 3000 km-wide area. But the view follows a satellite, so a few minutes later the user comes back to where it was before, with the slight shift of the earth's rotation, etc. So the clouds cannot be in the same place all the time.
I have so far been able to include city lights, auroras, lightning... everything except clouds. I have seen a lot of demos by realtime-rendering enthusiasts and researchers, but none of them had a nice, realistic cloud layer. However, I am sure I am the 100(...)00th person thinking about doing this, so please enlighten me.
A few questions are implied:
What input to use for clouds? Live meteorological data?
What rendering possibilities? A transparent layer with a cloud map, modified with shaders? A few transparent layers to get a feeling of volumetric rendering? But then how to cast shadows from one layer onto another: would the only solution be to use a mesh? Or could shadows be procedurally computed and mapped on the server every x minutes?
A few specifications
Here are some ideas summing up what I have not seen yet, sorted by importance:
Clouds hide 60% of the earth.
Clouds scatter the light of cities and lightning, and show Rayleigh scattering at night.
At this distance the parallax effect is visible and even quite awesome with the smallest clouds.
As far as I've seen, even expensive realtime online meteorological resources are not useful: they target rainy or stormy clouds with the help of UV and IR wavelengths, so they don't catch 100% of the clouds and don't give the 'normal' view we all know. Moreover, the rare good cloud textures shot in visible light hardly differentiate ground from clouds: sometimes a 5000 km-long coastline appears in the middle of nowhere. A server may be able to use those images to create better textures.
When I look at those pictures, I imagine that the least costly way would be to merge a few nice cloud meshes from a database containing different models, then slightly transform those meshes inside a shader while the user passes over. If the user is still there 90 minutes later when the view comes back around, it does not matter if the models are not exactly the same again. However, a hurricane cannot just disappear.
What do you think about this?
For such effects there is probably just one way to do it properly and that is:
Voxel maps + volume rendering, probably with back-ray-tracing.
As your position is fixed, it should not be too hard on memory requirements. You need to implement both Mie and Rayleigh scattering. Scattering can be simplified a lot and still look good; see:
simplified atmosphere scattering
realistic n-body solar system simulation
Voxel maps handle light gaps, shadows, and scattering relatively easily, but they need a lot of memory and computational power. All the other 2D techniques usually just painfully work around what 3D voxel maps do natively with little effort. For example:
Voxel map shadows
Procedural cloud map generators
You need this for each type of cloud so you have something to render. There are libs/demos/examples out there; see:
first relevant google hit
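To make the voxel map + volume rendering idea concrete, here is a tiny CPU-side sketch of the core compositing loop. In a real renderer this loop runs per pixel in a fragment shader, sampling a 3D texture of the voxel map; the density function and the constants below are stand-ins, not anything from the answer above.

```typescript
// Conceptual sketch of volume rendering a cloud voxel map by ray marching.
type Vec3 = [number, number, number];

// Stand-in cloud density: densest at the centre of a unit sphere, fading to the edge.
// A real app would sample the voxel map / 3D texture here instead.
function density(p: Vec3): number {
  const d = Math.sqrt(p[0] * p[0] + p[1] * p[1] + p[2] * p[2]);
  return Math.max(0, 1 - d);
}

// March a ray front to back, accumulating colour and opacity ("over" compositing).
function marchRay(origin: Vec3, dir: Vec3, steps = 128, stepSize = 0.02): { color: number; alpha: number } {
  let color = 0; // accumulated (grey) radiance
  let alpha = 0; // accumulated opacity
  for (let i = 0; i < steps && alpha < 0.99; i++) {
    const t = i * stepSize;
    const p: Vec3 = [origin[0] + dir[0] * t, origin[1] + dir[1] * t, origin[2] + dir[2] * t];
    const sigma = density(p) * stepSize;          // optical thickness of this slab
    const sampleAlpha = 1 - Math.exp(-sigma * 8); // Beer-Lambert style extinction (8 = arbitrary constant)
    // Front-to-back compositing: closer samples occlude farther ones.
    color += (1 - alpha) * sampleAlpha * 1.0;     // 1.0 = white cloud sample
    alpha += (1 - alpha) * sampleAlpha;
  }
  return { color, alpha };
}

// Example: one ray from a camera at z = -2 straight through the cloud volume.
console.log(marchRay([0, 0, -2], [0, 0, 1]));
```

Shadowing can then be approximated with the same machinery: at each sample, march a second, shorter ray toward the sun and attenuate the sample by the density accumulated along it.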

SpriteKit - minimize quadcount

I've been trying to teach myself SpriteKit and jumped into Ray Wenderlich's tutorials, which said that the quad count should be minimized for better performance. I turned on showsQuadCount, showsDrawCount, and showsNodeCount for debugging. However, I saw that the quad count is always equal to the node count. Can anyone explain in a simple way what the quad count really is and give an example where the quad count differs from the node count? (I did search Google but could not understand, so please do not give me a link without an explanation.) Thanks so much.
Every node that draws something draws a quad (two triangles).
So only plain SKNode nodes, which don't draw anything, will not increase the quad count.
Also, quad count isn't nearly everything. It's more important to support SpriteKit's internal batching by using texture atlases and by avoiding child nodes that use a different texture atlas than their parent; otherwise this interrupts batching.
