I am trying to develop a virtual fitting room app with the Microsoft Kinect SDK. I want to show a dress on the user's skeleton.
Can anyone tell me which of the following is the better option?
1) Draw the whole dress on the user's skeleton
2) Draw a texture on each joint of the skeleton
I tried the first option, but I also want to show or adjust the dress when the user turns to the right or left.
Can anyone help with displaying the cloth on the user's skeleton when he turns as well? If the user turns right or left, the cloth should stay aligned. Is this possible with a normal JPEG image, or do I have to create some other special type of image (perhaps some kind of 3D asset, I'm not sure)?
Regards,
Jayakumar Natarajan
To do what you want, you need to render a skinned, skeletally animated 3D model that can attach different parts corresponding to clothing items, similar to what the Xbox Live avatars do.
For flexible clothing that needs to billow or react to movement, you will also have to use some sort of cloth physics to move that little bit around properly.
It is impossible to explain all the necessary concepts here. You will probably have to work your way from displaying a skinned model and animating it based on the Kinect skeleton, to attaching different meshes based on the clothing outline (and possibly changing the material to enable color/material variations), to adding elements that can flex and behave realistically.
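To give you a feel for the "turning" part specifically: a common trick is to estimate the torso's yaw from the left and right shoulder joints and use that angle to rotate or swap whatever you draw. A minimal sketch of that calculation (Python, just to show the math; the joint values are made up, and the coordinate convention here, x sideways and z away from the sensor, is an assumption you should check against the SDK's skeleton-space documentation):

```python
import math

def torso_yaw(left_shoulder, right_shoulder):
    """Estimate torso rotation around the vertical axis (yaw) in degrees.

    left_shoulder / right_shoulder are (x, y, z) joint positions in meters.
    0 degrees means the shoulders are square to the sensor.
    """
    dx = right_shoulder[0] - left_shoulder[0]
    dz = right_shoulder[2] - left_shoulder[2]
    return math.degrees(math.atan2(dz, dx))

# Example: the right shoulder is 10 cm closer to the sensor than the left,
# so the user is slightly turned; prints roughly -14 (degrees).
print(torso_yaw((-0.2, 0.5, 2.0), (0.2, 0.5, 1.9)))
```

In a purely 2D approach you could use this angle to pick between a few pre-rendered views of the dress; for anything convincing you will still end up with the skinned 3D model described above.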
Using XNA is definitely the best answer. There's a very good example in the Microsoft Kinect Developer Toolkit named "Avateering-XNA". Have a look at it.
Also, if you need a skeleton for skinning 3D-modeled clothes, you can try the skeleton that comes with the model (dude.FBX) in that sample application. You can download the Kinect Toolkit here: http://www.microsoft.com/en-us/kinectforwindows/develop/developer-downloads.aspx
I'm going to be more specific about the situation:
I've captured a screenshot from the game DotA 2. The information I want to extract is which objects, e.g. heroes (along with their name, HP, ...), creeps (and which side they belong to), towers, etc., are visible in the image and where they are. One problem comes from the fact that in DotA 2 many of these objects can be seen from many perspectives, so let's reduce the problem and assume that every object has only one orientation. How might this problem be solved quickly enough that it can recognise all objects in real time, at about 30 fps? Any help or suggestions are welcome.
I think you have the right idea: a CNN for image segmentation. My point is that with so many different objects seen from different viewpoints and at different scales (because I guess you can zoom in/out on your heroes/objects), the easiest way (but the heaviest in terms of computation) is to build one CNN for each type of object.
But images would help a lot to get a better understanding of the problem.
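To make the "one CNN per object type" idea a bit more concrete, here is a minimal PyTorch sketch of a small binary classifier you could run over fixed-size crops of the screenshot. The crop size, layer sizes and class layout are placeholders, not anything DotA-specific:

```python
import torch
import torch.nn as nn

class ObjectClassifier(nn.Module):
    """Tiny binary CNN: does a 64x64 RGB crop contain this object type?"""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64 -> 32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32 -> 16
        )
        self.head = nn.Linear(32 * 16 * 16, 2)    # logits: [absent, present]

    def forward(self, x):
        x = self.features(x)
        return self.head(x.flatten(start_dim=1))

# One model per object type (heroes, creeps, towers, ...), evaluated on
# crops produced by a sliding window over the screenshot.
model = ObjectClassifier().eval()
crop = torch.rand(1, 3, 64, 64)        # placeholder for a real crop
with torch.no_grad():
    print(model(crop).softmax(dim=1))
```

At 30 fps you would have to batch the crops aggressively, and in practice a single multi-class detector is usually cheaper than one network per object, which is exactly the computational trade-off mentioned above.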
I'm stuck on a problem and need some help or guidance towards a possible solution.
Basically, in my application there will be a map with several zones.
The user can select any of these areas, and at that moment the area is filled with a color.
Imagine a map like this one; I need to be able to change the color of only one country.
Something like what happens in coloring-book apps (https://itunes.apple.com/pt/app/colorfly-best-coloring-book/id1020187921?mt=8), or the Paint Bucket command in Photoshop.
Any idea how to achieve something like this on iOS?
Thanks in advance
The paint-bucket technique you're looking for is a family of graphics algorithms usually called "flood fill". There are different approaches to the implementation depending on the circumstances and performance needs. (There is more at that Wikipedia link.)
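If you ever want to roll it yourself, the core of a basic (non-scanline) flood fill is only a few lines. Here is a minimal sketch over a plain 2D array of color values; the function name and the toy grid are just for illustration, and in a real app you would run this on pixel data extracted from your UIImage:

```python
from collections import deque

def flood_fill(pixels, start, new_color):
    """Fill the connected region containing `start` with new_color (BFS, 4-neighbour).

    pixels: 2D list of color values, indexed as pixels[y][x].
    start:  (x, y) seed point, e.g. where the user tapped.
    """
    height, width = len(pixels), len(pixels[0])
    x0, y0 = start
    target = pixels[y0][x0]
    if target == new_color:
        return
    queue = deque([(x0, y0)])
    while queue:
        x, y = queue.popleft()
        if 0 <= x < width and 0 <= y < height and pixels[y][x] == target:
            pixels[y][x] = new_color
            queue.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])

# Example: fill the "country" containing point (2, 1) with color 9.
grid = [[0, 0, 1, 1],
        [0, 0, 1, 1],
        [2, 2, 1, 1]]
flood_fill(grid, (2, 1), 9)
print(grid)  # the connected block of 1s becomes 9s
```

A scanline variant (like the library below) fills whole horizontal runs at a time instead of one pixel per queue entry, which matters on full-resolution images.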
I have no experience with it, but here is a library from GitHub that purports to implement this for iOS given a UIImage object: https://github.com/Chintan-Dave/UIImageScanlineFloodfill
Re: your question about doing this without user touch: yes, you'll want to keep a map of countries to (x, y) points so you can re-flood countries when required. That said, the intricacies of the country borders might make an algorithmic fill inexact without more careful normalization of the original source. If your overall map only contains a small set of possible states, there are other ways of achieving this goal, like keeping a complete set of possible images (created in e.g. Photoshop) and switching them out, or keeping a set of per-country "overlay" images that you swap in as needed. (But if the flood fill is accurate on that source image, and performant for your needs, then great.)
For university I'm working on a project in which I have to teach a robot (a Nao robot) to play nine men's morris. Unfortunately, I'm fairly new to the area of robotics and I need some tips on how to solve a few problems. Currently I'm working on the localization/orientation of the robot, and I'm wondering which localization approach would fit my project best.
A short explanation of the project:
The robot has a fixed starting position and has to walk around on a board which is about 3x3 meters (I will post a picture of the board when I reach 10 reputation). There are no obstacles on the field except the game tokens, and the game lines are marked in yellow on the board. For orientation I use the robot's two cameras.
I found some approaches like
Monte Carlo Localization
SLAM (Simultaneous Localization and Mapping)
but these approaches seem quite complex for a beginner like me, and I would really appreciate it if someone has some good ideas about a simpler way to solve this problem. Functionality has a far higher priority for me than performance.
I have only vague knowledge of the nine men's morris game as such, but I will try to give you a simpler idea.
First things first, you need a map of your board. This should be easy in your case because your environment is static. There are a few techniques for building this map of your board. For your case I would suggest a metric map, i.e. an occupancy grid. Assign coordinates to each cell in the grid; this will be helpful for robot navigation.
As you have mentioned, your robot starts from a fixed position. On start-up, initialize your robot with this reference location and orientation (with respect to the X-Y axes of the grid; maybe you don't even need the cameras, I am not sure!). By initialization I mean: mark your position on the grid.
Use dead reckoning for localization and keep updating the position and orientation of your robot as it moves across the board. I would hope that your robot gets some feedback from the servos, like the number of rotations and so forth. Do the math and update your robot's position coordinates as it moves into different cells of the grid.
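The dead-reckoning update itself is just accumulating small motions. A minimal sketch of the bookkeeping (the 10 cm cell size and the walk/turn values are made-up numbers; on a real Nao you would feed in whatever odometry the robot reports):

```python
import math

class Pose:
    """Robot pose on the board: x, y in meters, theta in radians (0 = +X axis)."""
    def __init__(self, x=0.0, y=0.0, theta=0.0):
        self.x, self.y, self.theta = x, y, theta

    def update(self, distance, turn):
        """Apply one odometry step: walk `distance` forward, then turn by `turn`."""
        self.x += distance * math.cos(self.theta)
        self.y += distance * math.sin(self.theta)
        self.theta = (self.theta + turn) % (2 * math.pi)

    def cell(self, cell_size=0.1):
        """Which occupancy-grid cell the robot is in (cell_size in meters)."""
        return int(self.x // cell_size), int(self.y // cell_size)

# Example: start at the fixed corner, walk 0.55 m, turn 90 degrees, walk 0.35 m.
pose = Pose()
pose.update(0.55, math.radians(90))
pose.update(0.35, 0.0)
print(pose.cell())  # (5, 3) with 10 cm cells
```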
You can use the A* algorithm to find a path for your robot. You need to do the path planning before you navigate. You also have to mark the game tokens on the grid to avoid collisions when planning the path.
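For the planning step, here is a minimal A* sketch on such an occupancy grid (0 = free cell, 1 = blocked by a token), using 4-connected moves and a Manhattan-distance heuristic; the grid and coordinates are made up for illustration:

```python
import heapq

def a_star(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid (0 = free, 1 = blocked).

    grid is a 2D list indexed as grid[row][col]; start/goal are (row, col).
    Returns the list of cells from start to goal, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    heuristic = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
    open_set = [(heuristic(start, goal), 0, start, [start])]
    visited = set()
    while open_set:
        _, cost, cell, path = heapq.heappop(open_set)
        if cell == goal:
            return path
        if cell in visited:
            continue
        visited.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                neighbour = (nr, nc)
                if neighbour not in visited:
                    heapq.heappush(open_set, (cost + 1 + heuristic(neighbour, goal),
                                              cost + 1, neighbour, [*path, neighbour]))
    return None

# Tiny 3x3 board section with one token blocking the middle cell.
board = [[0, 0, 0],
         [0, 1, 0],
         [0, 0, 0]]
print(a_star(board, (0, 0), (2, 2)))  # e.g. [(0,0), (0,1), (0,2), (1,2), (2,2)]
```

Re-run the search whenever a token is placed or moved so the blocked cells stay current.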
I'm building a game that reads a two-dimensional array to create the map, but the walls are all separate from the corners and floors: each wall, each corner and each floor is an individual image, and this is consuming a lot of CPU. I really want the map to feel random, though, and that's why I'm using an image for each corner and wall.
I was thinking that maybe I could generate a texture built by merging two or more different textures, to improve performance.
Does anyone know how I could do that? Or maybe there is another solution? Would converting the images to PVR make any difference?
Thanks
For starters, you should use a texture atlas, created with a tool like TexturePacker, grouping as many of your images as possible onto a single atlas. Basically, load it once and create as many sprites from it as you want without having to reload. Using PVR will speed up loading and help your bundle size.
Secondly, especially for the map background, you should use a CCSpriteBatchNode that you initialize with the above sprite sheet. Then, when you create a tile, just create the sprite and add it to the batch node, and add the batch node to your scene. The benefit is that regardless of the number of sprites (tiles) contained in the batch node, they will all be drawn in a single GL call. That is where you will gain the most from a performance standpoint.
Finally, don't rely on the FPS information when running in the simulator. The simulator does not use the host's GPU, and its performance is well below what you get on a device. So before posting a question about performance, make certain you measure on a device.
I'm not sure if this is the best forum for this, because it's not a programming question per se, but here goes.
I am the developer for an iOS application, and we contracted the design out to a third party. They delivered a massive Photoshop file with all of the individual pieces of artwork on individual layers, at double resolution. To get the artwork into Xcode, my workflow is as follows:
1) Show only the layers containing a particular unit of artwork
2) Select All
3) Copy Merged
4) Create a new image (fortunately, the dimensions are taken care of automatically)
5) Paste
6) Deselect the pasted layer and delete the Background layer, to preserve transparency
7) Save the image as x.psd
8) Save a copy as x@2x.png
9) Set the image size to 50% of the original dimensions
10) Save a copy as x.png
11) Discard changes
This app is pretty large, so it's quite tedious to go through this process for every little image. I'm not very Photoshop savvy, so I'm wondering if there is a better way. It seems to me that it should be easy enough to combine steps 3-11 into one macro or script. The only thing that changes in each iteration of these steps is the output name. Any suggestions?
The normal workflow is exactly as you described. You can write a Photoshop script to do the layer exporting, and Apple provides an Automator tool that will let you resize those graphics from 2x down to 50%. There's a great tutorial here. This can help you get your graphics to scale quickly.
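If you would rather script that resizing step than use Automator, the 50% downscale is only a few lines with Python and the Pillow library. A sketch, assuming the @2x PNGs are already exported into one folder (the folder name is a placeholder):

```python
from pathlib import Path
from PIL import Image

def make_1x_copies(folder):
    """For every x@2x.png in `folder`, write an x.png at half the pixel size."""
    for retina in Path(folder).glob("*@2x.png"):
        image = Image.open(retina)
        half = image.resize((image.width // 2, image.height // 2), Image.LANCZOS)
        half.save(retina.with_name(retina.name.replace("@2x", "")))

make_1x_copies("exported_artwork")  # placeholder folder name
```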
There are solutions to automate what you're trying to accomplish. This video tutorial shows how to take your PSD or PNG and port it into an Xcode project with all of the layers properly placed in a view for you, create view controllers, and set up segues.
Disclaimer - I am associated with the JUMPSTART Platform as mentioned in the video.
You can script Photoshop with JavaScript, and I've written scripts in the past to perform similar series of steps; it wasn't too hard to figure out, even for someone like me who'd never written any JavaScript before. Photoshop also has 'Actions', which are like macros, and you can probably do something simple like this with Actions as well, but it's not something I've personally tried. Check out the Adobe docs on scripting Photoshop: Adobe Photoshop Scripting.