I am drawing, or should I say "stamping", an image using the CGContextDrawImage function in Objective-C. The image gets drawn at points that are determined by touch movements. Basically I'm stamping an image to create a "brush" effect. Looks something like this:
I am happy with the results; however, when the touch movement slows down, the image gets drawn on top of itself and ruins the alpha value I want. Is there a blend technique in which the opacity of the stamps would not stack on top of each other? Or should I just look at changing my points so that they are not so close together when the movement slows down?
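For context, the stamping itself looks roughly like this (a minimal sketch, assuming a brushImage property and an offscreen bitmap context; the names are illustrative, not from the actual project):

- (void)stampAtPoint:(CGPoint)point inContext:(CGContextRef)context
{
    // Hypothetical names: self.brushImage is the stamp image, context is an
    // offscreen bitmap context that the stroke is accumulated in.
    CGImageRef brush = self.brushImage.CGImage;
    CGFloat w = self.brushImage.size.width;
    CGFloat h = self.brushImage.size.height;
    CGRect stampRect = CGRectMake(point.x - w / 2.0, point.y - h / 2.0, w, h);
    // Each call composites the brush over what is already there, so stamps that
    // land close together accumulate alpha -- the stacking described above.
    CGContextDrawImage(context, stampRect, brush);
}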
Thanks in advance.
So I'm making a game, and pretty much when the player (which is a triangular-shaped rocket) hits an object flying at it (a rock), the game ends. I have everything working well, but my problem is that the rocket is a triangle yet the image view it's in is a rectangle. So if the edge of the image view touches the rock, the game ends even though the actual rocket didn't touch the object. So basically, how can I make the rock image view not recognize the parts of the rocket image view which are empty? Basically, a triangular-shaped image view.
Thank you for your help. Let me know if you need more info or want to see the code I have for them to collide.
You can analytically represent the triangle with 3 points and the rock with a center and radius, then find and implement an algorithm that hit-tests those two shapes. Or you can draw the two shapes into some graphics context using an appropriate blend mode and check for overlapping pixels (for instance, draw one as red and the other as green and look for a pixel that is both red and green). You could actually do that with two image views having those colors at 0.5f alpha added to a third invisible view, but you would need to get the image from the view and then iterate through all the pixels. In either case, do this check only after the corresponding view frames overlap.
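A minimal sketch of the analytic approach, assuming the triangle's three corners and the rock's center/radius are known in a common coordinate space (function names are illustrative, not an existing API):

// Returns the squared distance from point p to segment ab.
static CGFloat SquaredDistanceToSegment(CGPoint p, CGPoint a, CGPoint b)
{
    CGFloat abx = b.x - a.x, aby = b.y - a.y;
    CGFloat apx = p.x - a.x, apy = p.y - a.y;
    CGFloat lenSq = abx * abx + aby * aby;
    CGFloat t = (lenSq > 0.0) ? (apx * abx + apy * aby) / lenSq : 0.0;
    t = MAX(0.0, MIN(1.0, t));
    CGFloat dx = a.x + t * abx - p.x;
    CGFloat dy = a.y + t * aby - p.y;
    return dx * dx + dy * dy;
}

// YES if the circle (center, radius) touches the triangle (t0, t1, t2).
static BOOL CircleHitsTriangle(CGPoint center, CGFloat radius,
                               CGPoint t0, CGPoint t1, CGPoint t2)
{
    // 1. Is the circle center inside the triangle? (same-side sign test)
    CGFloat d0 = (t1.x - t0.x) * (center.y - t0.y) - (t1.y - t0.y) * (center.x - t0.x);
    CGFloat d1 = (t2.x - t1.x) * (center.y - t1.y) - (t2.y - t1.y) * (center.x - t1.x);
    CGFloat d2 = (t0.x - t2.x) * (center.y - t2.y) - (t0.y - t2.y) * (center.x - t2.x);
    BOOL hasNeg = (d0 < 0) || (d1 < 0) || (d2 < 0);
    BOOL hasPos = (d0 > 0) || (d1 > 0) || (d2 > 0);
    if (!(hasNeg && hasPos)) return YES;

    // 2. Otherwise, is any edge within one radius of the circle center?
    CGFloat r2 = radius * radius;
    return SquaredDistanceToSegment(center, t0, t1) <= r2
        || SquaredDistanceToSegment(center, t1, t2) <= r2
        || SquaredDistanceToSegment(center, t2, t0) <= r2;
}

Run this only once the two view frames already intersect, as suggested above.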
I have been searching the internet for the last two days and have checked many source code samples, but none of them produce the result I want.
The rotated image would have perspective, but there would still be no change in the heights of the left and right sides of the image.
I want to place an image inside the laptop screen.
Please help me out. Thanks.
So you want a 2D perspective drawing of a laptop screen (on an iOS device?) and want to put a 2D image on that screen, with the image transformed so its perspective looks correct on the laptop screen, right?
What you need to do is add an image view on top of your laptop image view. Let's call it laptopScreenImageView.
Then apply a CATransform3D to the laptopScreenImageView's layer.
The trick to getting 3D perspective out of a CALayer is to modify the .m34 value of the transform. Typically you set the .m34 value to a very small negative number, somewhere around -1/200 to -1/500. (The denominator in the fraction is the z coordinate of the "eye position" for viewing the perspective image, in pixels, i.e. how many pixels "above" the image the viewer's eye should seem to be. I don't fully understand it, to be honest; I fiddle with the .m34 value until I get something that looks right.)
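As a rough sketch of what that can look like (assuming QuartzCore is imported; the 30-degree angle and the -1/300 value are just placeholders to tweak until the image sits convincingly on the screen):

#import <QuartzCore/QuartzCore.h>

// Sketch: tilt laptopScreenImageView in 3D so it appears to lie on the laptop screen.
CATransform3D transform = CATransform3DIdentity;
transform.m34 = -1.0 / 300.0;   // perspective: the "eye distance" discussed above
transform = CATransform3DRotate(transform, 30.0 * M_PI / 180.0, 0.0, 1.0, 0.0); // rotate around the y axis
laptopScreenImageView.layer.transform = transform;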
Alternately you could try adding a CATransformLayer to your laptop image view's layer, and then adding a CALayer containing your image as a sublayer of the CATransformLayer. I haven't used CATransformLayers before, but the docs say they are supposed to support layers with 3D perspective, giving you the same effect as modifying the .m34 component of a layer's transform.
I have an image of a landscape which I need to fill with different colors.
When I select a color from the palette and start scrubbing on any particular part, only that part should get the color, even if I accidentally move my finger outside of that part of the image.
So basically I need to detect which part of the image I have tapped, so that only that part takes the color.
I am developing this app in Cocos2d-x, but any help with the logic would be a good starting point.
Here is an example of what I want.
Note: I know I could achieve this by using separate images and then detecting touches, but that increases the app size by a lot of MBs.
I guess the user will be able to draw only on the white parts of the image.
If the above is true, what I want you to do is, in your touchesMoved method, check whether any black (non-white) pixel is present between the previous touch point and the current touch point.
If there is no such black pixel, then draw; otherwise don't draw.
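A minimal sketch of that check, assuming you already have the image's raw RGBA pixel buffer available; pixelData, bytesPerRow, and the white threshold here are illustrative placeholders, not part of the question's project:

// Sketch: sample pixels along the segment from 'from' to 'to' and report whether
// any of them is dark (i.e. part of the black outline). Assumes both points are
// already inside the image bounds and pixelData is an 8-bit RGBA buffer.
- (BOOL)outlinePixelBetweenPoint:(CGPoint)from andPoint:(CGPoint)to
{
    int steps = (int)ceil(MAX(fabs(to.x - from.x), fabs(to.y - from.y)));
    for (int i = 0; i <= steps; i++) {
        CGFloat t = (steps > 0) ? (CGFloat)i / steps : 0.0;
        int x = (int)round(from.x + t * (to.x - from.x));
        int y = (int)round(from.y + t * (to.y - from.y));
        const UInt8 *px = self.pixelData + y * self.bytesPerRow + x * 4;
        BOOL nearlyWhite = (px[0] > 240 && px[1] > 240 && px[2] > 240);
        if (!nearlyWhite) {
            return YES;   // hit the black outline: don't draw this segment
        }
    }
    return NO;            // the path stays inside the white region: safe to draw
}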
I am using a 4-way flood-fill algorithm.
I have a transparent image with a black outline.
That is the starting point image (without color).
And after filling the color, the image looks like this.
Please help me and let me know what I can do to get a proper fill.
I have used and implemented flood fill myself in other projects; the algorithm goes through the whole drawing, looking for closed spaces, and then fills inside (or outside) them.
Your problem happens with every tool in the world that fills a drawing, and the cause is the same: the spaces are not 100% closed.
The flood-fill algorithm goes pixel by pixel and stops when it detects a black pixel. For example, the arm of the scuba diver is not thick enough, or it has holes in it, and the flood fill manages to leak through it instead of treating it as a boundary.
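For reference, a bare-bones 4-way flood fill over an RGBA buffer looks roughly like this; the buffer layout, darkThreshold, and alpha check are assumptions for the sketch. Anti-aliased grey edge pixels that are lighter than the threshold are exactly the kind of gap the fill can escape through:

#include <stdint.h>
#include <stdlib.h>
#include <stdbool.h>

// Sketch: iterative 4-way flood fill over an 8-bit RGBA buffer. An opaque pixel
// whose channels are all below darkThreshold is treated as outline and never
// crossed, so a gap or too-thin line in the outline lets the fill escape.
static void FloodFill4(uint8_t *pixels, int width, int height,
                       int startX, int startY,
                       uint8_t fillR, uint8_t fillG, uint8_t fillB, uint8_t darkThreshold)
{
    if (startX < 0 || startY < 0 || startX >= width || startY >= height) return;

    // Explicit stack of pixel indices; each filled pixel pushes at most 4 neighbours.
    int *stack = malloc((4 * (size_t)width * height + 1) * sizeof(int));
    bool *visited = calloc((size_t)width * height, sizeof(bool));
    int top = 0;
    stack[top++] = startY * width + startX;

    while (top > 0) {
        int index = stack[--top];
        if (visited[index]) continue;
        visited[index] = true;

        uint8_t *px = pixels + index * 4;
        // Opaque, dark pixel: part of the black outline, so stop here.
        if (px[3] > 127 && px[0] < darkThreshold && px[1] < darkThreshold && px[2] < darkThreshold)
            continue;

        px[0] = fillR; px[1] = fillG; px[2] = fillB; px[3] = 255;

        int x = index % width, y = index / width;
        if (x + 1 < width)  stack[top++] = index + 1;        // right
        if (x - 1 >= 0)     stack[top++] = index - 1;        // left
        if (y + 1 < height) stack[top++] = index + width;    // down
        if (y - 1 >= 0)     stack[top++] = index - width;    // up
    }
    free(stack);
    free(visited);
}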
Nobody here can tell you why unless we take your project and analyse it, so the best I can offer is a guideline about where your error could be.
I tried the code with an image that has a very precisely defined border around it (from here) and it seems to work OK with that image. I suspect that if you zoom into your image there is some grey aliasing around the edges which won't get filled. Perhaps the algorithm has a threshold function that can be tweaked?
Try setting the andTolerance value (I tried 4, which seemed to improve my example):
//Call function to flood fill and get new image with filled color
UIImage *image1 = [self.image floodFillFromPoint:tpoint withColor:newcolor andTolerance:4];
So I am trying to get a very basic "flashlight"-style thing going in one of my games.
The way I was getting it to work was to have a layer on top of my game screen; this layer draws a black rectangle at ~80% opacity, creating the look of darkness over my game scene:
ccDrawSolidRect(ccp(0,0), ccp(480,320), ccc4f(0, 0, 0, 0.8));
What I want to do is draw this rectangle EVERYWHERE on the screen, except for around a cone of vision that will represent the "light source".
This would create a dark overlay on top of everything except the light, giving the illusion of a torch/light/flashlight.
The only way I can foresee this happening is by using ccDrawSolidPoly(), but since the position of the light source changes, so would the vertices for the poly.
Any suggestions on how to achieve this would be great.
You can use ccDrawSolidPoly() and avoid having to manually update vertices. For this you can create a new subclass of CCNode representing your light object, and do your custom shape drawing in its -(void)draw method.
The ccDraw...() functions draw relative to the local sprite coordinates, so you can then move and rotate your new sprite to suit your needs and cocos2d will do the vertex transformations for you.
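A rough sketch of that idea, assuming cocos2d 2.x; the class name and vertex values are placeholders, and this only demonstrates drawing a cone in the node's local coordinates, not the full darkness-with-a-hole effect:

// LightCone -- sketch of a light-cone node whose shape is drawn in local coordinates.
@interface LightCone : CCNode
@end

@implementation LightCone

- (void)draw
{
    [super draw];
    // Triangle in local coordinates: apex at the node's position, opening to the right.
    // Move or rotate the node itself to aim the cone; cocos2d transforms the vertices.
    CGPoint cone[3] = { ccp(0, 0), ccp(200, -60), ccp(200, 60) };
    ccDrawSolidPoly(cone, 3, ccc4f(1.0f, 1.0f, 0.8f, 0.4f));
}

@end

You would then add a LightCone instance above the dark overlay layer and simply set its position and rotation each frame instead of recomputing polygon vertices by hand.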
Update: I found out that you might be better off subclassing CCDrawNode instead of CCNode, as it has some facilities for raw OpenGL drawing (OpenGL's vertexArrayBuffer and vertexBufferObject internal variables and a buffer for vertices, their colors and their texCoords). If your stuff is very simple, maybe subclassing the plain CCNode is enough.
Could a PNG be used instead as a mask, as the layer above?
Like that binocular vision you sometimes see in cartoons?
Or a filter similar to a Photoshop mask that darkens as it grows outward towards the edge of the screen?
Just a thought anyway...
A picture of more of what you're trying to explain might be good too.