Create a floor plan in Compose

I need to create a floor plan in Android, which is essentially the same problem reported in Create floor plan in Android.
I'm currently using the Canvas component to draw it, but it doesn't scale well once I have hundreds of rectangles.
The floor plan is also dynamic: I need to change some styling whenever the server changes its representation.
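For context, here is a minimal Kotlin sketch of the setup described: a single Compose Canvas drawing a server-provided list of rectangles. The Room model and FloorPlan composable are hypothetical names invented for illustration, not from the question.

import androidx.compose.foundation.Canvas
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.geometry.Offset
import androidx.compose.ui.geometry.Size
import androidx.compose.ui.graphics.Color
import androidx.compose.ui.graphics.drawscope.Stroke

// Hypothetical server-driven model: one entry per rectangle.
data class Room(
    val x: Float, val y: Float,
    val width: Float, val height: Float,
    val fill: Color, val outline: Color
)

@Composable
fun FloorPlan(rooms: List<Room>, modifier: Modifier = Modifier) {
    // One Canvas draws every rectangle in a single pass; when the server
    // pushes new styling, only this draw lambda needs to re-execute.
    Canvas(modifier = modifier.fillMaxSize()) {
        rooms.forEach { room ->
            val topLeft = Offset(room.x, room.y)
            val size = Size(room.width, room.height)
            drawRect(color = room.fill, topLeft = topLeft, size = size)
            drawRect(color = room.outline, topLeft = topLeft, size = size, style = Stroke(width = 2f))
        }
    }
}

A single Canvas like this draws all rectangles in one pass; if hundreds of rectangles are still slow, the cost is often repeated recomposition or per-frame allocations rather than the drawing itself.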

Related

How to align SCNScene to a physical table using ARKit?

I'm trying to find the best strategy to align an SCNScene to a physical table, just like the ARKit app WWF Free Rivers does.
Currently I'm just testing out mapping a simple plane model with the same dimensions as the table. If I draw out the plane that ARKit detects, I can see that the plane is not very accurate at the edges; it always extends past them (image below).
So I can't really rely on that plane to simply place the model at its center. The model isn't rotated correctly either (image below).
I had another idea: use the ARReferenceImage technique, take a picture of the table-top texture, and let ARKit find and match this "image" of the table. But even with a wood-grain texture there wasn't enough data for ARKit to recognize it, and in these cases ARKit simply fails. It doesn't even attempt an approximate match.
How can I go about doing this?
Ideas I've had so far:
Take an image of the table and use the ARReferenceImage feature to match it. This didn't work. Maybe it would if I added some more distinctive feature points to the table, like QR codes in the corners.
Detect the plane, then tap the four corners of the table to map out a square, and use that.
Do as the WWF app does: just place the object anywhere on the plane, and then let the user scale, move, and rotate the model into the correct placement.
Any more ideas? What do you think will be the best approach to this?
Here are two options I can think of that you could use.
You could create an ARWorldMap (iOS 12+ only) and use it instead of the ARReferenceImage: walk around the area while creating a map that subsequent ARKit sessions will remember. You'll have to experiment a bit with how to fit your models within the four corners of the table (this is somewhat tedious without much help from the SceneKit editor). However, when you load the saved ARWorldMap and localize against it (just like with an ARReferenceImage), your model should fit within the four corners of the table every time.
If you use something like Unity (and its ARKit plugin), it has much more powerful editor tools (a 3D viewer/designer). Some tools can save the map just like ARWorldMap but also bring the map's details into the editor, so you can line things up easily. Placenote's Spatial Capture toolkit can help here: Placenote (iOS 11+) creates its own "world map", but it exposes the visual details in the Unity editor, making it easier to line things up and then localize against. The map is also stored on a managed cloud from the get-go, which makes sharing across phones much easier.
P.S.: Both of these options require you to keep the environment generally static (no large lighting changes, etc.), though the same constraint applies when using ARReferenceImage.

Placing objects automatically when a ground plane is detected with Vuforia

I'm working on an application where the concept is that you can 'select' objects before actually placing them. What I wanted to do was have some low-quality objects on a shelf or something similar. When users select an object, they can then tap to place the high-quality version of it in their area for further viewing.
I was wondering whether this is possible with Vuforia. I wanted to use this platform since it works well from what I can tell, and it's cross-platform (the application needs to run on Android and the HoloLens).
I have set up a basic application where you can place a capsule in the area. Now I want to place the object (in this case the capsule) automatically once Vuforia has detected a ground plane. From what I can see, the plane finder has events that fire when input is detected, but I couldn't find an event that fires when the ground plane itself is detected. Is this still possible with Vuforia? I know it's doable on the HoloLens, but I'd like to know whether it's possible on Android and other mobile devices. I really don't know where to start looking, so I hope someone can point me in the right direction.
Let me know if I need to include more information!
The Vuforia PlaneFinderBehaviour has the event OnAutomaticHitTest, which fires every frame in which a ground plane is detected.
So you can use it to automatically spawn an object.
You have to add your method to the On Automatic Hit Test list (instead of the On Interactive Hit Test list) of the "Plane Finder" object.
I've heard that Vuforia Fusion does not yet support ARCore (it does support ARKit), so it uses an internal implementation to simulate ARCore functionality; they are waiting for a final ARCore release to support it. Many users have reported that their objects drift even on ARCore-supported devices.

Is it possible to emulate a polarization filter during image processing, using C++ or OpenCV?

I've looked all over Stack Overflow and other sources, but I haven't seen any code that successfully emulates what a polarization filter does, i.e. reduces glare. The application I want this code for won't allow a physical filter, so I was wondering whether anyone has tried this.
I'm using OpenCV image processing (Mat) in C++ on an Android platform, and glare is interfering with the results I'm trying to get. Imagine a lost object you're trying to find based on a finite set of red/green/blue values; if the object is smooth, glare produces bad results. That's my current problem.
OK, no: there is no virtual polarization that can be accomplished with code alone. It is possible to find glare spots on shiny objects (via image color saturation) and overwrite them with nearby glare-free pixels, but that's not the same thing as real polarization. That requires a physical metal mesh in front of the lens or sensor to eliminate the stray light waves that create glare.
Tell you what: the person who invents a virtual polarization filter using just code will be an instant billionaire, since every cell phone and digital camera company will want to license the patent.
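For what it's worth, the glare-spot workaround mentioned above can be sketched roughly as follows. The question uses C++, but this sketch uses OpenCV's Java/Kotlin bindings (available on Android); the HSV thresholds and the suppressGlare name are assumptions for illustration, and this hides highlights rather than actually polarizing anything.

import org.opencv.core.Core
import org.opencv.core.Mat
import org.opencv.core.Scalar
import org.opencv.core.Size
import org.opencv.imgproc.Imgproc
import org.opencv.photo.Photo

// Mark pixels that are very bright but nearly colorless (typical of
// specular glare) and fill them in from their surroundings.
fun suppressGlare(bgr: Mat): Mat {
    val hsv = Mat()
    Imgproc.cvtColor(bgr, hsv, Imgproc.COLOR_BGR2HSV)

    // Glare mask: saturation < 40 and value > 220 (assumed thresholds;
    // OpenCV HSV ranges are H 0..180, S 0..255, V 0..255).
    val mask = Mat()
    Core.inRange(hsv, Scalar(0.0, 0.0, 220.0), Scalar(180.0, 40.0, 255.0), mask)

    // Grow the mask slightly so the halo around each glare spot is covered.
    val kernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, Size(5.0, 5.0))
    Imgproc.dilate(mask, mask, kernel)

    // Overwrite the masked pixels from nearby ones (Telea inpainting).
    val result = Mat()
    Photo.inpaint(bgr, mask, result, 5.0, Photo.INPAINT_TELEA)
    return result
}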

What kind of chart is suitable for a workflow? And it should be free

I have a requirement in my product to create a widget for a workflow-style process.
As you can see in the attached image, I am looking for a similar kind of chart.
Since we already have FusionCharts, is it possible to build something similar with it?
Since you already have FusionCharts, I would recommend using the Drag Node Chart. It is exactly the type of chart you need, and the dragging functionality can easily be turned off!
http://www.fusioncharts.com/charts/drag-node-charts/
In fact, one of the examples on that page looks very close to the image in your question, and as far as I can tell, the chart can be customised to look almost the same.
The only downside I can think of is that you will have to calculate the positions of your workflow nodes manually (the connectors will connect automatically). However, the positions can easily be computed in a simple handler for the beforeDataUpdate event fired by the chart.
PS: Using the annotations feature, you can dynamically position text labels, images, and other such items around your process diagram.

Cocos2d: graphics tool

I've started learning Cocos2d (and also Box2d) to develop games. I've read some tutorials and seen that two pairs of tools are commonly used: "LevelHelper-SpriteHelper" and "PhysicsEditor-TexturePacker".
I noticed that LevelHelper-SpriteHelper is simpler and organizes levels and physics objects very well.
With PhysicsEditor-TexturePacker, on the other hand, I ran into some difficulties; the approach is not very clear to me.
So which of "LevelHelper-SpriteHelper" and "PhysicsEditor-TexturePacker" is the better pair of tools?
And what are the differences? Can you explain them to me? Thanks
This should answer your questions: http://abitofcode.com/2012/07/cocos2d-useful-tools/
PhysicsEditor is a program you use to create a tracing around a sprite that isn't a simple polygon. For example, it could trace an image of a car, so that when you detect a collision between your car and another object with a physics engine (something like Box2d), it registers a collision with just the car and not with a square surrounding the car. Here is a picture that shows what it does: http://www.codeandweb.com/physicseditor/features.
TexturePacker is used to pack all the sprites you use in your game into one sprite sheet. This minimizes the amount of memory your sprites take up.
The picture at http://www.codeandweb.com/texturepacker shows what it does. Instead of adding all your individual sprite images to your game, you put them all on a sprite sheet, which trims the space around each image and packs everything into a file that cocos2d and the iPhone can work with.
This is helpful because cocos2d only takes images whose dimensions are powers of two (2, 4, 8, 16, ...). A sprite that is 50x50 would actually take up 64x64 worth of space in your game.
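To make that padding cost concrete, here is a quick sketch of the next-power-of-two rule (in Kotlin purely for illustration; cocos2d itself is Objective-C):

// Smallest power of two at or above n, e.g. 50 -> 64.
fun nextPowerOfTwo(n: Int): Int {
    var p = 1
    while (p < n) p = p shl 1
    return p
}

fun main() {
    val side = nextPowerOfTwo(50)          // 64
    val wasted = side * side - 50 * 50     // 4096 - 2500 = 1596 unused texels
    println("A 50x50 sprite occupies ${side}x${side}, wasting $wasted texels")
}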
Here is a tutorial that explains most of this better than I did: http://www.raywenderlich.com/2361/how-to-create-and-optimize-sprite-sheets-in-cocos2d-with-texture-packer-and-pixel-formats
And here is a project where both are used: http://www.raywenderlich.com/7261/monkey-jump
And here is one that uses LevelHelper and SpriteHelper: http://www.raywenderlich.com/6929/how-to-make-a-game-like-jetpack-joyride-using-levelhelper-spritehelper-part-1
For a list of more tools, go here:
http://www.learn-cocos2d.com/2011/06/complete-list-cocos2d-tools/
SpriteHelper is essentially the same tool as TexturePacker. Both create a single large texture from individual images.
LevelHelper is an editing tool to design your game visually. It also allows editing of the physics world.
PhysicsEditor is a tool to create the (collision) shapes of physics bodies from images. No more, no less.
