I tried using Inkscape, as Gazebo recommends on their website, but the result did not import into Gazebo and crashed the program a few moments later. I am simply trying to import a 3x3 grid that I can display on a table in Gazebo.
The solution was to disconnect the squares so that they appeared as holes in a larger square.
Another possible solution is to apply your image as a texture on the surface of a .dae mesh.
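For the mesh route, a minimal SDF sketch that displays a textured .dae might look like the following; the model name, mesh path, and the assumption that the grid image is already UV-mapped onto the .dae (e.g. exported from Blender) are placeholders, not something from the original question:

    <?xml version="1.0"?>
    <sdf version="1.6">
      <model name="grid_board">
        <static>true</static>
        <link name="link">
          <visual name="visual">
            <geometry>
              <!-- grid.dae: a flat plane with the 3x3 grid image UV-mapped onto it -->
              <mesh><uri>model://grid_board/meshes/grid.dae</uri></mesh>
            </geometry>
          </visual>
        </link>
      </model>
    </sdf>

Gazebo reads the texture from the material baked into the .dae, so the image itself is referenced by the mesh exporter rather than by the SDF.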
I have a problem with a 3D model in SceneKit. I also use a maxstAR object. After updating to iOS 12 it started looking brighter than before (screenshots with the same issue here).
I use two light nodes, without autoenablesDefaultLighting.
One more fact about this issue: when I hide all the lights, the model (which should be black) is grey, as if it had extra light...
Sorry for my bad English; I really need your help!
I solved the same issue by converting the .obj files to .scn files in Xcode and using those scenes as nodes: Editor -> Convert to SceneKit file format (.scn).
screenshot
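For reference, loading the converted scene and wrapping its contents in a node could look roughly like this; it is only a sketch, and the "art.scnassets/model.scn" path is a placeholder for wherever the converted file ends up in your bundle:

    import SceneKit

    // Loads the scene produced via Editor -> Convert to SceneKit file format (.scn)
    // and returns its contents wrapped in a single node.
    func loadConvertedModelNode() -> SCNNode? {
        // Placeholder path: use the actual location of your converted .scn file.
        guard let modelScene = SCNScene(named: "art.scnassets/model.scn") else { return nil }
        let modelNode = SCNNode()
        for child in modelScene.rootNode.childNodes {
            modelNode.addChildNode(child)
        }
        return modelNode
    }

You can then add the returned node to your view's scene, e.g. sceneView.scene?.rootNode.addChildNode(node).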
I am making an augmented reality app to demonstrate the features of a MacBook, and I am using the Vuforia SDK.
Here is my problem:
1) I tried the Vuforia Core Features sample and used Image Targets. Image Targets gives only one image at a time; I attached the output in the image below.
2) My expectation is to show multiple texts or images while capturing the real MacBook, like in the image below.
Please guide me to achieve this.
The Vuforia iOS SDK uses OpenGL ES to load 3D objects, which is unfriendly to work with.
You can use SceneKit instead: put your objects in a scene, set up a rectangle node, and place the models at the four corners of the rectangle. When the image track succeeds, load your scene.
How to use SceneKit with Vuforia? Check this: https://github.com/yshrkt/VuforiaSampleSwift
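A rough sketch of the SceneKit side of that answer; the plane size, the sphere stand-ins for your models, and the imageTargetFound(targetNode:) entry point are assumptions rather than part of the Vuforia API, so wire it to whatever callback your Vuforia integration fires when tracking succeeds:

    import SceneKit
    import UIKit

    // Builds a transparent rectangle (plane) node with a small marker node at each corner.
    // Replace the spheres with your own text or image models.
    func makeOverlayNode(width: CGFloat = 0.3, height: CGFloat = 0.2) -> SCNNode {
        let plane = SCNPlane(width: width, height: height)
        let material = SCNMaterial()
        material.diffuse.contents = UIColor.clear
        plane.materials = [material]
        let planeNode = SCNNode(geometry: plane)

        // One placeholder sphere per corner of the rectangle.
        let corners: [(CGFloat, CGFloat)] = [
            (-width / 2, -height / 2), (width / 2, -height / 2),
            (-width / 2,  height / 2), (width / 2,  height / 2),
        ]
        for (x, y) in corners {
            let marker = SCNNode(geometry: SCNSphere(radius: 0.01))
            marker.position = SCNVector3(Float(x), Float(y), 0)
            planeNode.addChildNode(marker)
        }
        return planeNode
    }

    // Hypothetical hook: call this from wherever your Vuforia wrapper reports a
    // successful image track, passing the node that follows the tracked target.
    func imageTargetFound(targetNode: SCNNode) {
        targetNode.addChildNode(makeOverlayNode())
    }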
Example of tileset:
http://www.rpg-studio.org/wiki/images/9/92/Tileset.png
How to import these images into this grid in Xcode?
https://koenig-media.raywenderlich.com/uploads/2016/06/AdjacencyTileGrid.png
The problem is that Xcode doesn't understand that there are a lot of sub-images inside the parent image.
I've already seen a lot of examples that use the Tiled map editor, but it has its own format and you can't design such levels in Xcode's visual editor, so they are not appropriate for me.
I also noticed that people always seem to avoid tilesets: they get a lot of separate images from somewhere instead and don't describe what to do with a single big tileset.
The simplest solution might be to just start with individual images that can feed into Xcode’s image handling pipeline.
My understanding of the tilesets you’ve described is that they are produced from individual images with a tool like TexturePacker, and the packed tileset is then consumed by the Tiled Map Editor. The .tmx maps produced by the Tiled Map Editor are consumed in Xcode using SKTiled for Swift or JSTileMap for Objective-C.
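If you do want to stay with a single big tileset image rather than separate images, one low-tech sketch is to slice it into SKTexture tiles yourself with SKTexture(rect:in:), since that initializer takes a rect in the unit coordinate space of the parent texture. The image name and the 32x32 tile size below are assumptions, not something from the question:

    import SpriteKit

    // Slices a tileset image laid out as a regular grid of fixed-size tiles
    // into individual SKTextures (row 0 is the bottom row of the sheet).
    func sliceTileset(named name: String, tileSize: CGSize) -> [SKTexture] {
        let sheet = SKTexture(imageNamed: name)
        let sheetSize = sheet.size()
        let columns = Int(sheetSize.width / tileSize.width)
        let rows = Int(sheetSize.height / tileSize.height)

        var tiles: [SKTexture] = []
        for row in 0..<rows {
            for column in 0..<columns {
                // The rect is expressed in unit coordinates of the parent texture.
                let rect = CGRect(x: CGFloat(column) * tileSize.width / sheetSize.width,
                                  y: CGFloat(row) * tileSize.height / sheetSize.height,
                                  width: tileSize.width / sheetSize.width,
                                  height: tileSize.height / sheetSize.height)
                tiles.append(SKTexture(rect: rect, in: sheet))
            }
        }
        return tiles
    }

    // Example: let tiles = sliceTileset(named: "Tileset", tileSize: CGSize(width: 32, height: 32))
    // Each texture can then back an SKSpriteNode or an SKTileDefinition for an SKTileMapNode.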
I am trying to make an app for image recognition with OpenCV. I want to implement something like this, but I don't know how I should do it. Can anyone give me some help on where I should begin? I have downloaded OpenCV for iOS from here.
I have a hard copy of an image as an example, which I want to scan through the camera, plus the images (markers) that I have imported into the project. When I scan the image through the camera, it should overlay the markers on the image, and when I tap/select a marker it should show the info for that marker.
Here is my image:
It's just an example I have made (square, circle and triangle as markers).
So when the image is scanned, the markers should come up as an overlay, and on tapping a marker I should get its name (if the overlay over the circle named "Air" is tapped, it should show me "Air" in an alert, or if the square named "Tiger" is tapped, it should say "Tiger").
My problem is that the images have roughly the same pattern, but the result is different for every part, so I don't know how I should approach this.
Can anyone help me out by suggesting an idea, or if anyone has done something like this, please tell me how I should implement it.
I have to start from scratch; any help, please.
Can this be achieved using OpenCV, or do I have to use another SDK such as Vuforia or Layar?
Maybe you should search a little bit before asking for help...
Anyway, the shapes you want to find do not seem to change (scale, rotation), so you can look at the template matching methods implemented in OpenCV (see the OpenCV tutorial).
If the shapes do change, you should look at more powerful methods such as SIFT or SURF. Both are already implemented in OpenCV (the link from aishack is a tutorial on re-implementing SIFT; you can find a tutorial on using the OpenCV method on the same website).
I have made a model using SketchUp, and I have test-rendered it using Blender and it looks great.
However, loading it in XNA causes two problems.
1. One of the textures becomes see-through; not entirely transparent, but items underneath it on the inside of the model are visible (this is not the case in Blender).
2. I have a rounded part on the model that is divided into smaller parts, and the texture gets out of sync (the positioning is all wrong).
I have tried exporting the model to 3ds and then using Blender to save it as FBX (to eliminate any problems with SketchUp).
I have also tried using Autodesk's FBX Converter; same problems =(
I'm using myModel.Draw(World, View, Projection); to render the model.
Any suggestions?
/Jimmy
1) Sounds like a backface culling issue; try this:
device.RenderState.CullMode = CullMode.None; (try the CW and CCW variants)
Also make sure the depth buffer is enabled.
2) This may be a similar problem to an issue I had with Blender when copying the bones; try gModel.CopyBoneTransformsTo(transforms); as well as gModel.CopyAbsoluteBoneTransformsTo(transforms);