I am trying to composite the following images to produce an image demonstrating how this swimwear would look in the attached pattern material; realism is key. I am a strong Java programmer with a CS background, and I have experience in other languages like Python, but I have no idea where to start. I looked at JavaCV, but it is so complex and has so many functions that I could not tell which are relevant here.
Any guidance or examples would be greatly appreciated.
I'm afraid there's no straightforward solution you can reuse, since this can be quite a large project. Roughly:
1. Extract the normal map of the swimsuit. This is needed to make the pattern stick to the swimsuit; for example, the pattern should look bulged over the bra compared to other, flatter areas.
2. Do texture synthesis (http://en.wikipedia.org/wiki/Texture_synthesis), since you need to enlarge the pattern beyond the sample you have.
3. Apply the synthesized pattern.
4. For realism, you also need to extract the shading from the swimsuit (this will be easier if the swimsuits are white). After applying the pattern, put the shading back into the composite image; see the sketch below.
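Not the full pipeline, but a minimal sketch of steps 3 and 4 in Python with OpenCV, assuming a roughly white swimsuit. The file names and the binary swimsuit mask are hypothetical, and simple tiling stands in for real texture synthesis and normal-map draping:

import cv2
import numpy as np

photo = cv2.imread("swimsuit_photo.jpg").astype(np.float32) / 255.0
pattern = cv2.imread("pattern.jpg").astype(np.float32) / 255.0
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

# Tile the pattern to cover the photo (a crude stand-in for texture synthesis).
ph, pw = photo.shape[:2]
reps = (ph // pattern.shape[0] + 1, pw // pattern.shape[1] + 1, 1)
tiled = np.tile(pattern, reps)[:ph, :pw]

# Treat the white swimsuit's brightness as a shading map, then multiply it
# into the pattern so that folds and shadows show through.
gray = cv2.cvtColor((photo * 255).astype(np.uint8), cv2.COLOR_BGR2GRAY)
shading = (gray.astype(np.float32) / 255.0)[..., None]
textured = tiled * shading

# Composite: patterned swimsuit inside the mask, original photo elsewhere.
m = mask[..., None]
result = textured * m + photo * (1.0 - m)
cv2.imwrite("composite.jpg", (result * 255).astype(np.uint8))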
You can refer to these papers:
http://maverick.inria.fr/Publications/2009/WOBT09/TextureDraping_EGSR_2009.pdf
and
http://graphics.cs.cmu.edu/projects/nrt/
I am working on a problem where I need to extract the visible branching structure and the foliage of a tree separately. The technique could be fully automatic or semi-supervised (where the user draws a few strokes to help the segmentation). I would like to know how this can be implemented, and which tools, techniques, or languages would be most convenient for accomplishing this task.
Not an answer necessarily, but this is too much to fit in a comment. I messed around with a picture of a tree for a few minutes.
Here's my original image:
I tried taking the difference between the G channel and the R and B channels to highlight the greener areas, using this (in MATLAB):
img = im2double(rgb_image);  % work in double so the arithmetic doesn't saturate at 255
green_diff = 2*img(:,:,2) - (img(:,:,1) + img(:,:,3));
figure, imshow(green_diff, [])
I also tried looking at just the H channel in HSV color space.
htest = rgb2hsv(rgb_image);
htest(:,:,2:3) = 1;  % max out saturation and value so only hue varies
figure, imshow(hsv2rgb(htest))
You don't need to convert it back to RGB; it's just cooler to look at this way.
I don't have any good ideas for the branches right now. The only thing that really comes to mind is trying to take advantage of the fact that branches are connected to leaves, and that branches usually exhibit a tree-like shape (surprising, I know).
Is it possible to remove the IR (infrared) filter on your camera? It can be done quite cheaply nowadays. If so, you could probably use the fact that the chlorophyll in the foliage reflects IR wavelengths quite strongly and therefore shows up bright at IR wavelengths.
Try Googling "NDVI" (Normalised Difference Vegetation Index) for further explanation.
Demonstration/Explanation of NDVI
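If you do get an IR channel, here is a minimal NDVI sketch in Python with numpy and OpenCV; the file names are hypothetical, and the 0.3 threshold is just a common starting point:

import cv2
import numpy as np

# Registered near-infrared and red channel images of the same tree.
nir = cv2.imread("tree_nir.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
red = cv2.imread("tree_red.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# NDVI = (NIR - Red) / (NIR + Red); chlorophyll pushes it toward +1.
ndvi = (nir - red) / (nir + red + 1e-6)  # epsilon avoids division by zero

# Threshold to get a rough foliage mask; everything else is background/branches.
foliage_mask = (ndvi > 0.3).astype(np.uint8) * 255
cv2.imwrite("foliage_mask.png", foliage_mask)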
I am interested in how to change the hue of a texture in an efficient way. I am experimenting with creating space dust that changes its color every few seconds, with a nice, smooth transition from one color to another.
I see a few possible ways to do this:
Using Core Image, as in this example. But I don't know how this would work in combination with SpriteKit...
Using particle emitters to create the space dust and changing the color of the particles over time using the particleColorSequence property.
An easy one that came to mind while playing with Photoshop: using two identical but differently colored images, one over the other, and changing the opacity of the topmost one.
This gives me the effect I want, and it actually looks fabulous, but is there a better way? Maybe using SKTexture? In this particular case I just need to change from one color to another, but what would be an efficient way to do this when multiple changes are required one after another? As it stands, my third approach requires an additional image for every color...
Here is the link that most closely describes what I am trying to accomplish. Just look at how the space dust changes its color over time (from dark blue to purple, and later to green or orange). I suppose this is done programmatically... I would ask the moderators to remove the link if it is not suitable to post here. Thanks!
This is kind of a hard question to answer and is rather subjective, however...
I personally would take the emitter-node approach, because it seems built for exactly the kind of use you describe and could produce some cool trailing effects.
That being said, you specifically asked about changing the hue, and colorBlendFactor might be what you are really looking for. I don't have a great link for it, but this might point you in the right direction; you can see how they blend colors to get the desired result.
Your solution of changing the alpha of two differently colored copies doesn't sound like a bad approach either; a toy sketch of the idea follows.
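This is just the math of the cross-fade in plain numpy, not SpriteKit code; in SpriteKit the same result falls out of animating the top sprite's alpha. The texture and colors here are made up:

import numpy as np

def crossfade_tint(texture, color_a, color_b, t):
    # texture: HxW grayscale in [0, 1]; color_a/color_b: RGB triples; t in [0, 1].
    tint = (1.0 - t) * np.asarray(color_a) + t * np.asarray(color_b)
    return texture[..., None] * tint  # HxWx3 tinted image

dust = np.random.rand(64, 64)  # stand-in for the dust texture
# Halfway between a dark blue and a purple.
frame = crossfade_tint(dust, (0.1, 0.1, 0.6), (0.5, 0.1, 0.6), 0.5)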
Hopefully that helps and good luck =)
Let's say I want to display a customizable (2D, cartoon-like) character, where some properties, e.g. eye color, hair style, clothing, etc., can be chosen from a predefined set of options. Now I want to animate the character. What's the best way to deal with the customization?
1) For example, I could make a sprite sheet for each combination of properties. That's not very memory-efficient and not very flexible, but probably gives the best performance.
2) I could compose the character from various layers, where each property only affects one layer. Thus, I could make a sprite-sheet for the body, a collection of sprite-sheets for the eyes (one for each eye color), etc.
2a) In that case, I could merge the selected sprite-sheets to generate a single sprite-sheet containing the animation of the customized character (a sketch of this merging step follows the list).
2b) Alternatively, I could keep the sprite-sheets separate and try to animate them simultaneously as layers. I fear that this might become a problem performance-wise.
3) I could try to modify the layers programmatically, e.g. use a sprite-sheet for the eyes as a mask and map some texture onto it before merging it down to a single sprite-sheet. I would think this is a very flexible approach for simple properties like eye color, but it might become difficult for things like hair style. I am aware that this depends a lot on the character, and a general answer is probably difficult.
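For option 2a, the merging step could look something like this minimal Python/Pillow sketch, assuming same-sized sprite-sheet layers with transparency (the file names are made up):

from PIL import Image

LAYER_FILES = ["body.png", "eyes_blue.png", "hair_style2.png", "outfit1.png"]

def merge_layers(paths):
    # Alpha-composite the selected layers, bottom first, into one sheet.
    sheet = Image.open(paths[0]).convert("RGBA")
    for path in paths[1:]:
        sheet = Image.alpha_composite(sheet, Image.open(path).convert("RGBA"))
    return sheet

merged = merge_layers(LAYER_FILES)
merged.save("custom_character_sheet.png")  # one precomposed sheet for the engine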
I assume that my problem is not new, so there is probably a standard approach to it.
Concerning the platform, I'm particularly interested in iOS and try to avoid OpenGL (well, I'm open-minded). Maybe there is a nice framework that can help me here?
Thanks!
Depending on what you're working on, you might want to create part or all of the animations in another tool, such as Flash; it is much easier to work in a visual environment.
There are tools that take SWF files and create sprite sheets, which you would then animate in cocos2d.
That is a common game creation workflow.
You probably want to take a look at how to create sprites in cocos2d.
Cocos2d comes with a set of tools that help you animate individual parts and offers abstractions for composing parts (like CCSpriteBatchNode or CCNode). There are also tools that help you pack sprites into sprite sheets (e.g. TexturePacker) and develop levels (e.g. LevelHelper).
Cocos2d is an open-source framework and it is widely used. There is also cocos3d, but I have never used it :).
I have an input as a 3D binary image and the preferred output below:
Input:
Preferred Output:
What image processing methods should I look for if I am to have only the spiky object(s) remain, just like the preferred output above?
Well, first of all I'd try to remove all those white lines and get only the pattern; your image seems way too erratic to work with otherwise.
If your pattern is quite regular (as it seems here), you can pick a template and use correlation to extract the interesting parts of the image; see the sketch below.
After this, only the four big patterns should be left. I would then compute some (mainly shape-based) descriptors; some examples are here:
http://opencv.willowgarage.com/documentation/cpp/imgproc_structural_analysis_and_shape_descriptors.html
I'm sure there are some simple descriptors for this; I'm thinking of energy, perimeter, that kind of thing.
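A rough sketch of the correlation step in Python with OpenCV, assuming a 2D slice or projection of the volume; the file names and the 0.8 threshold are placeholders, and the template would be one hand-cut example of the repeating pattern:

import cv2
import numpy as np

img = cv2.imread("slice.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)

# Normalized cross-correlation; peaks mark locations that resemble the template.
response = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
ys, xs = np.where(response > 0.8)

# Mark the matches so the regular pattern can be masked out or inspected.
h, w = template.shape
for x, y in zip(xs, ys):
    cv2.rectangle(img, (x, y), (x + w, y + h), 255, 1)
cv2.imwrite("matches.png", img)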
I am new to image processing, and I have started a simple project for recognizing text in images with complex backgrounds. I want to use a stroke filter as a first step, but I cannot find enough material about stroke filters to implement one.
I have only managed to find the bare definition of stroke filters. Does anyone know anything about stroke filters that could help me with my implementation?
I found one interesting article here, but it doesn't explain stroke filters in depth:
http://www-video.eecs.berkeley.edu/Proceedings/ICIP2006/pdfs/0001473.pdf
"Stroke Filters" are not a standard idea in image-processing. Instead, this is a term created for the paper that you linked to, where they define what they mean by a stroke filter (sect. 2) and how you can implement one computationally (sect. 4).
Because of this, you are unlikely to find an implementation in a standard toolkit; though you might want to look around for someone who's posted an implementation, or contact the authors.
Basically, though, they are saying that you can identify text in an image based on typical properties of text, in particular, that text has many stroke-like structures with a specific spatial distribution.
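This is not the paper's stroke filter, but a rough morphological substitute in Python with OpenCV that also responds to thin, stroke-like structures; the kernel size and file name are assumptions:

import cv2

gray = cv2.imread("scene_text.jpg", cv2.IMREAD_GRAYSCALE)

# Black-hat highlights thin dark structures against a brighter background;
# the kernel should be somewhat larger than the expected stroke thickness.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 9))
stroke_response = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)

# Threshold the response to get candidate text regions.
_, candidates = cv2.threshold(stroke_response, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite("stroke_candidates.png", candidates)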