I'm stuck on a problem and need some help or guidance toward a possible solution.
Basically, in my application there will be a map with several zones.
The user can select any of these areas; when they do, that area is filled with a color.
Imagine a map like this one: I need to be able to change the color of only one country.
Something like what happens in coloring-book apps (https://itunes.apple.com/pt/app/colorfly-best-coloring-book/id1020187921?mt=8), or the Paint Bucket tool in Photoshop.
Any idea how to achieve something like this on iOS?
Thanks in advance
The paint bucket technique you're looking for is a family of graphics algorithms usually called "flood fill". There are different approaches to the implementation, depending on the circumstances and performance needs. (There is more at that Wikipedia link.)
I have no experience with it, but here is a library from GitHub that purports to implement this for iOS given a UIImage object: https://github.com/Chintan-Dave/UIImageScanlineFloodfill
Re: your question about doing this without a user touch: yes, you'll want to keep a map of countries to (x, y) seed points so you can re-flood countries when required. That said, the intricacies of the country borders might make an algorithmic fill inexact without more careful normalization of the original source. If your overall map only contains a small set of possible states, there are other ways of achieving this goal, like keeping a complete set of possible images (created in e.g. Photoshop) and switching them out, or keeping a set of per-country "overlay" images that you swap in as needed. (But if the flood fill is accurate on that source image, and performant for your needs, then great.)
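To make the algorithm concrete, here is a minimal BFS flood-fill sketch in Python using Pillow (not the linked library's API; the file name, seed coordinates, and tolerance are made-up assumptions). The same logic ports directly to iOS by operating on a raw RGBA pixel buffer from a CGImage.

```python
# Minimal flood fill with a color tolerance, using Pillow.
# File name, seed coordinates, and tolerance are illustrative assumptions.
from collections import deque
from PIL import Image

def flood_fill(img, seed, fill, tolerance=32):
    """Fill the roughly-uniform region around `seed` with the `fill` color."""
    px = img.load()
    w, h = img.size
    target = px[seed]
    if target == fill:
        return
    def matches(c):
        return all(abs(a - b) <= tolerance for a, b in zip(c, target))
    queue, seen = deque([seed]), {seed}
    while queue:
        x, y = queue.popleft()
        if not matches(px[x, y]):
            continue  # hit a border pixel; stop expanding here
        px[x, y] = fill
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < w and 0 <= nxt[1] < h and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)

# Keeping a map of countries to seed points, as suggested above, lets you
# re-flood a country without a user touch. Coordinates are hypothetical.
seeds = {"portugal": (120, 340), "spain": (200, 330)}
img = Image.open("map.png").convert("RGB")
flood_fill(img, seeds["portugal"], (255, 0, 0))
img.save("map_filled.png")
```

A scanline variant (as in the linked library) fills whole horizontal runs per iteration, which is usually faster than this per-pixel queue.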
I'm going to be more specific about the situation:
I've captured a screenshot from the game DotA. The information I want to extract is which objects, e.g. heroes (along with their name, HP, ...), creeps (and which side they belong to), towers, etc., are visible in the image, and where they are. One complication is that in DotA 2 many of these objects can be viewed from many perspectives, so let's reduce the problem and assume that every object has only one orientation. How might this problem be solved efficiently enough to recognise all objects in real time at about 30 fps? Any help or suggestions are welcome.
I think you're on the right track: a CNN for image segmentation. My point is that with so many different objects seen from different viewpoints and scales (because I guess you can zoom in/out on your heroes/objects), the easiest way (but the heaviest in terms of computation) is to build one CNN for each type of object.
But images would help a lot to get a better understanding of the problem.
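To illustrate the per-object-type classifier idea, here is a minimal PyTorch sketch; the class count, crop size, and layer widths are arbitrary assumptions, not values tuned for DotA 2. At 30 fps you would run it over candidate crops of the frame; a single-shot detector over the whole frame is typically cheaper than many separate CNNs, which is the computation cost mentioned above.

```python
# Minimal CNN classifier over fixed-size screenshot crops (PyTorch).
# All sizes below are illustrative assumptions.
import torch
import torch.nn as nn

class UnitClassifier(nn.Module):
    def __init__(self, num_classes=10):  # e.g. hero, creep, tower, background, ...
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(64 * 8 * 8, num_classes)  # assumes 64x64 crops

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# One forward pass over a dummy batch standing in for screenshot patches:
model = UnitClassifier()
crops = torch.randn(32, 3, 64, 64)
logits = model(crops)  # shape (32, num_classes)
```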
I want to dynamically create realistic clouds, including their movement and multiplication (density). I found this link, which seems to explain more than I can understand. I cannot find a way to replicate the clouds, and there is practically no other tutorial online. Does anyone know how I can achieve this result? Thank you in advance!
Use SceneKit; you can make a dynamic particle effect:
https://developer.apple.com/library/prerelease/ios/samplecode/SceneKitVehicle/Introduction/Intro.html
In this example, after the vehicle code, you can find the smoke animation. You can get all the assets you need from there, like smoke.scnp and smoke.png.
You need to change a few lines of code, like the smoke color, the light color, and the camera position.
But with that, I guess you can get one of the best results possible for real-time cloud simulation on iOS.
(The other solution would be to use OpenGL, but trust me, this is a ton easier.)
Maybe this guide can also be useful for understanding the basics (you just need them to achieve your clouds):
http://www.raywenderlich.com/83748/beginning-scene-kit-tutorial
I am trying to develop a virtual fitting room app with the Microsoft Kinect SDK. I want to show the dress on the tracked skeleton.
Can anyone tell me which of the following approaches is the better one?
1) Draw the whole dress on the user's skeleton
2) Draw a texture on each and every joint of the skeleton
I tried the first option, but I also want to show or alter the dress when the user turns to the right or left.
Can anyone help with displaying the cloth on the user's skeleton when they turn as well? If the user turns right or left, the cloth should stay aligned. Is this possible with a normal JPEG image, or do I have to create some other special type of image (I'm not sure, perhaps some kind of 3D model)?
Regards,
Jayakumar Natarajan
To do what you want, you need to render a skinned, skeletally animated 3D model that can attach different parts corresponding to clothing items, similar to what Xbox Live avatars do.
For flexible clothing that needs to billow and react to movement, you will have to use some sort of cloth physics to move that bit around properly.
It is impossible to explain all the necessary concepts here. You will probably have to work your way from displaying a skinned model and animating it based on the Kinect skeleton, to attaching different meshes based on the clothing outline (and possibly changing the material to enable color/material variations), to adding elements that can flex and behave realistically.
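A full skinned-mesh pipeline won't fit in an answer, but the "align the cloth when the user turns" part boils down to estimating torso orientation from the tracked joints. Here is a framework-agnostic Python sketch with made-up joint coordinates standing in for the Kinect skeleton stream:

```python
# Estimate torso yaw from the two shoulder joints, then pick which
# pre-rendered clothing view to overlay. Coordinates are made-up
# stand-ins for Kinect skeleton data, in meters of camera space.
import math

def torso_yaw(left_shoulder, right_shoulder):
    """Angle of the shoulder line in the camera's x/z plane, in degrees.
    0 means squarely facing the camera."""
    dx = right_shoulder[0] - left_shoulder[0]
    dz = right_shoulder[2] - left_shoulder[2]
    return math.degrees(math.atan2(dz, dx))

def pick_view(yaw, threshold=20.0):
    """Which sign means 'left' depends on the coordinate handedness of
    your skeleton stream, so calibrate this once against real data."""
    if abs(yaw) < threshold:
        return "front"
    return "turned_left" if yaw < 0 else "turned_right"

left, right = (-0.18, 0.4, 2.0), (0.17, 0.4, 1.9)  # hypothetical joints
print(pick_view(torso_yaw(left, right)))  # small angle -> "front"
```

A flat JPEG can only be swapped between such pre-rendered views; for continuous alignment you are back to the skinned 3D model described above.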
Using XNA is definitely the best answer. There's a very good example in the Microsoft Kinect Developer Toolkit named "Avateering-XNA". Have a look at it.
Also, if you need a skeleton to skin 3D-modeled clothes, you can try the skeleton that comes with the model (dude.FBX) in that sample application. You can download the Kinect Toolkit here: http://www.microsoft.com/en-us/kinectforwindows/develop/developer-downloads.aspx
I'm not sure if this is the best forum for this, because it's not a programming question per se, but here goes.
I am the developer for an iOS application, and we contracted the design out to a third party. They delivered a massive Photoshop file with all of the individual pieces of artwork done on individual layers, at double resolution. To get the artwork into Xcode, my workflow is as follows:
1. Show only the layers containing a particular unit of artwork
2. Select All
3. Copy Merged
4. Create a new image (fortunately, the dimensions are taken care of automatically)
5. Paste
6. Deselect the pasted layer and delete Background, to preserve transparency
7. Save the image as x.psd
8. Save a copy as x@2x.png
9. Set the image size to 50% of the original dimensions
10. Save a copy as x.png
11. Discard changes
This app is pretty large, so it's quite tedious to do this process for every little image. I'm not very Photoshop savvy, so I'm wondering if there is a better way. It seems to me that it should be easy enough to combine steps 3-11 into one macro or script or something. The only thing that changes in each iteration over these steps is the output name. Any suggestions?
The normal workflow is exactly as you described. You can write a Photoshop script to do the layer exporting, and Apple provides an Automator tool that will allow you to resize those graphics from 2x down by 50%. Great tutorial here. This can help get your graphics to scale quickly.
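If Automator isn't convenient, the 50% downscale is also easy to script. Here is a sketch in Python with Pillow, assuming the x@2x.png naming convention from the question (the folder name is illustrative):

```python
# Batch-downscale @2x PNGs to 1x using Pillow. The "artwork" folder and
# the x@2x.png naming convention are assumptions from the question.
from pathlib import Path
from PIL import Image

for path in Path("artwork").glob("*@2x.png"):
    img = Image.open(path)
    half = img.resize((img.width // 2, img.height // 2), Image.LANCZOS)
    out = path.with_name(path.name.replace("@2x", ""))
    half.save(out)
    print(f"{path.name} -> {out.name}")
```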
There are solutions to automate what you're trying to accomplish. This video tutorial shows how to take your PSD or PNG and port it into an Xcode project with all of the layers properly placed in a view for you, create view controllers, and set up segues.
Disclaimer - I am associated with the JUMPSTART Platform as mentioned in the video.
You can script Photoshop with JavaScript, and I've written scripts in the past to perform a similar series of steps; it wasn't too hard to figure out, even for someone like me who'd never written any JavaScript before. Photoshop also has 'Actions', which are like macros, and you can probably do something simple like this with Actions as well, but it's not something I've personally tried. Check out the Adobe docs on scripting Photoshop: Adobe Photoshop Scripting.
I would like to convert a PowerPoint presentation into a series of images, specifically one per slide, so they can be uploaded as an image gallery to a blog. Does anyone know of any libraries that can convert a .ppt into images? Any language is fine as long as it can run on a *nix server, so no C# or .NET-dependent libraries.
Even if one exists, and I would guess one does, it can't address animations in a meaningful way. I see more and more .ppt presentations making good use of animations to get their points across. Many of these animations overlap one another. How could such a slide be turned into a single image? How would you prioritize which animated image segments should overlap the others? You may want to keep an eye on this thread: Converting ppt to png using Apache poi
This might get you a solution via PHP: How to work with powerpoint in php?
This might work with python: https://stackoverflow.com/q/4995877/657003
This is a good discussion of the problem you face: http://www.joelonsoftware.com/items/2008/02/19.html
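For the static-snapshot case (animations flattened to each slide's final state, per the caveat above), one common *nix approach is to render through PDF. Here is a sketch assuming LibreOffice and Poppler's pdftoppm are installed on the server:

```python
# Convert a .ppt into one PNG per slide by going through PDF.
# Assumes LibreOffice and Poppler (pdftoppm) are installed on the server.
import subprocess
from pathlib import Path

def ppt_to_images(ppt_path, out_dir="slides"):
    Path(out_dir).mkdir(exist_ok=True)
    # 1. Render the deck to PDF with headless LibreOffice.
    subprocess.run(
        ["libreoffice", "--headless", "--convert-to", "pdf",
         "--outdir", out_dir, ppt_path],
        check=True,
    )
    pdf = Path(out_dir) / (Path(ppt_path).stem + ".pdf")
    # 2. Rasterize each PDF page to a numbered PNG (slide-1.png, ...).
    subprocess.run(
        ["pdftoppm", "-png", str(pdf), str(Path(out_dir) / "slide")],
        check=True,
    )

ppt_to_images("talk.ppt")  # hypothetical input file
```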