XNA beginner question about the Draw method

I understand that I have to draw everything in Draw(), and that it loops continuously.
But I don't want to redraw the same texture again and again. For example, I want to create a texture, draw something onto that texture once (not with SpriteBatch), and then only draw that texture in Draw().
Is it possible?
What can I use?

You have to draw again and again; in short, if you don't, it won't show. A wise man once wrote in a Windows development book:
Ask not why the text on your windows has to be constantly drawn; ask why it never used to be in the DOS/Unix command line.
If something is placed over the area you're drawing to and you don't redraw it, it simply won't be there. You need to keep drawing it for it to be sustained on screen. It's done very quickly and won't hurt anything (especially if you're thinking in terms of a background).

Not drawing it again is a performance optimisation. You should only do that if you really need to.
If you do need to do this, create a render target, draw your scene to the render target once, and then each frame draw that render target to the screen (using SpriteBatch makes this easy) instead of redrawing the scene.
Take a look at this question about caching drawing using render targets.
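A minimal sketch of that approach in XNA 4.0 terms, assuming the usual Game subclass with a spriteBatch field; sceneCache, sceneIsDirty and DrawExpensiveScene are illustrative names, not framework APIs:

RenderTarget2D sceneCache;   // hypothetical field: the cached scene
bool sceneIsDirty = true;    // set this whenever the cached content must change

protected override void LoadContent()
{
    spriteBatch = new SpriteBatch(GraphicsDevice);
    // PreserveContents keeps the cached pixels alive between frames
    // (the default DiscardContents may lose them on some platforms).
    sceneCache = new RenderTarget2D(GraphicsDevice,
        GraphicsDevice.PresentationParameters.BackBufferWidth,
        GraphicsDevice.PresentationParameters.BackBufferHeight,
        false, SurfaceFormat.Color, DepthFormat.None, 0,
        RenderTargetUsage.PreserveContents);
}

protected override void Draw(GameTime gameTime)
{
    if (sceneIsDirty)
    {
        // Render the expensive content once, into the render target.
        GraphicsDevice.SetRenderTarget(sceneCache);
        GraphicsDevice.Clear(Color.Transparent);
        spriteBatch.Begin();
        DrawExpensiveScene(spriteBatch);   // placeholder for your own drawing code
        spriteBatch.End();
        GraphicsDevice.SetRenderTarget(null);
        sceneIsDirty = false;
    }

    // Every frame: just draw the cached texture, not the whole scene.
    GraphicsDevice.Clear(Color.CornflowerBlue);
    spriteBatch.Begin();
    spriteBatch.Draw(sceneCache, Vector2.Zero, Color.White);
    spriteBatch.End();

    base.Draw(gameTime);
}

Whenever the cached content actually changes (new background, resized window, and so on), set sceneIsDirty back to true so the target is re-rendered.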

Related

Moving point with mouse

I drew a lot of points in my program with WebGL. Now I want to pick any point and move it to a new position. The problem is that I don't know how to select a point. Am I supposed to add an action listener to each point?
WebGL is a rasterization library. It has no concept of movable or clickable positions or points. It just draws pixels where you ask it to.
If you want to move things it's up to you to make your own data, use that data to decide if the mouse was clicked on something, update the data to reflect how the mouse changed it, and finally use WebGL to re-render something based on the data.
Notice none of those steps except the last one involve WebGL. WebGL has no concept of an actionlistener since WebGL has no actions you could listen to. It just draws pixels based on what you ask it to do. That's it. Everything else is up to you and outside the scope of WebGL.
Maybe you're using some library like three.js, X3D or Unity3D, but in that case your question would be about that specific library, as all input/mouse/object-position issues would be specific to that library (because, again, WebGL just draws pixels).
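To make the "decide if the mouse was clicked on something" step concrete, here is a small, language-agnostic picking sketch (written in C# only to keep one language across this page; the type name and the pick radius are illustrative). The idea is identical in JavaScript: keep your own list of points and hit-test the mouse position against it.

using System.Collections.Generic;

struct Point2
{
    public float X, Y;
    public Point2(float x, float y) { X = x; Y = y; }
}

static class Picking
{
    // Returns the index of the picked point, or -1 if nothing is within pickRadius.
    // Note: the mouse coordinates must already be in the same space as the point data.
    public static int Pick(List<Point2> points, float mouseX, float mouseY, float pickRadius = 5f)
    {
        int best = -1;
        float bestDistSq = pickRadius * pickRadius;
        for (int i = 0; i < points.Count; i++)
        {
            float dx = points[i].X - mouseX;
            float dy = points[i].Y - mouseY;
            float distSq = dx * dx + dy * dy;
            if (distSq <= bestDistSq) { best = i; bestDistSq = distSq; }
        }
        return best;
    }
}

On mouse-down, call Pick with the cursor position; while dragging, overwrite the picked point's coordinates with the new mouse position and re-render everything from the updated data.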

Where to call SetRenderTarget?

I'd like to change my RenderTargets between SpriteBatch.Begin and SpriteBatch.End. I already know this works:
GraphicsDevice.SetRenderTarget(target1);
SpriteBatch.Begin();
SpriteBatch.Draw(...);
SpriteBatch.End();
GraphicsDevice.SetRenderTarget(target2);
SpriteBatch.Begin();
SpriteBatch.Draw(...);
SpriteBatch.End();
But I'd really like to make this work:
SpriteBatch.Begin();
GraphicsDevice.SetRenderTarget(target1);
SpriteBatch.Draw(...);
GraphicsDevice.SetRenderTarget(target2);
SpriteBatch.Draw(...);
SpriteBatch.End();
I've never seen anybody doing this, but I didn't find any reason why.
EDIT: a little more about why I want to do this:
In my project, I use SpriteSortMode.Immediate (to be able to change the BlendState when I want), and I simply iterate through a sorted list of sprites, and draw them all.
But now I want to apply a multi-pass shader to some sprites, but not all of them! I'm quite new to shaders, but from what I understand, I have to draw my sprite onto an intermediate one using the first pass, and then draw the intermediate sprite onto the final render target using the second pass. (I'm using a Gaussian blur pixel shader.)
That's why I'd like to draw on the target I want, using the desired shader, without having to make a new begin/end.
The question is: Why do you want to change the render target there?
You won't get any performance improvement, since the batch has to be split anyway when the render target (or any other render state) changes.
SpriteBatch tries to group sprites by common attributes, for example by texture when SpriteSortMode.Texture is used. That means sprites sharing a texture will be drawn in the same draw call (batch). Having fewer batches can improve performance. But you can't change GPU state during a draw call, so when you change the render target you are bound to use two draw calls anyway.
Ergo, even if the second example worked, the number of batches would be the same.
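To make the multi-pass blur case from the question's edit concrete, the usual pattern is simply one Begin/End pair per target, each with its own effect pass. A rough sketch in XNA 4.0 terms; blurEffect, the technique names, spriteTexture and the render targets are assumptions for illustration, not code from the question:

// Pass 1: blur the sprite horizontally into an intermediate render target.
// (blurEffect, spriteTexture and intermediateTarget are hypothetical fields.)
GraphicsDevice.SetRenderTarget(intermediateTarget);
GraphicsDevice.Clear(Color.Transparent);
blurEffect.CurrentTechnique = blurEffect.Techniques["HorizontalBlur"]; // assumed technique name
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
                  null, null, null, blurEffect);
spriteBatch.Draw(spriteTexture, Vector2.Zero, Color.White);
spriteBatch.End();

// Pass 2: blur the intermediate result vertically onto the back buffer.
GraphicsDevice.SetRenderTarget(null);
blurEffect.CurrentTechnique = blurEffect.Techniques["VerticalBlur"]; // assumed technique name
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
                  null, null, null, blurEffect);
spriteBatch.Draw(intermediateTarget, spritePosition, Color.White);
spriteBatch.End();

// Sprites that don't need the shader go in their own Begin/End with no effect.
spriteBatch.Begin();
// ... draw the rest of the scene ...
spriteBatch.End();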

Free hand painting and erasing using UIBezierPath and CoreGraphics

I have been trying a lot but haven't found a solution yet. I have to implement painting and erasing on iOS, and I successfully implemented the painting logic using UIBezierPath. The problem is that for erasing I used the same logic as for painting, with kCGBlendModeClear, but I can't redraw on the erased area, because on each pass in drawRect I have to stroke both the painting and the erasing paths. So is there any way to subtract the erasing path from the drawing path to get the resulting path and then stroke it? I am very new to Core Graphics and look forward to your replies and comments. Or is there any other way to implement the same thing? I can't use the background color as an eraser because my background is textured.
You don't need to stroke the path every time, in fact doing so is a huge performance hit. I guarantee if you try it on an iPad 3 you will be met with a nearly unresponsive screen after a few strokes. You only need to add and stroke the path once. After that, it will be stored as pixel data. So don't keep track of your strokes, just add them, stroke them, and get rid of them. Also look into using a CGLayer (you can draw to that outside the main loop, and only render it to your rect in the main loop so it saves lots of time).
These are the steps that I use, and I am doing the exact same thing (I use a CGPath instead of UIBezierPath, but the idea is the same):
1) In touches began, store the touch point and set the context to either erase or draw, depending on what the user has selected.
2) In touches moved, if the point is a certain arbitrary distance away from the last point, move to the last point (CGContextMoveToPoint) and draw a line to the new point (CGContextAddLineToPoint) in my CGLayer. Calculate the rectangle that was changed (i.e. the one containing the two points) and call setNeedsDisplayInRect: with that rectangle (see the sketch at the end of this answer).
3) In drawRect render the CGLayer into the current window context ( UIGraphicsGetCurrentContext() ).
On an iPad 3 (the one that everyone has the most trouble with due to its enormous pixel count) this process takes between 0.05 ms and 0.15 ms per render (depending on how fast you swipe). There is one caveat though: if you don't take the proper precautions, the entire frame rectangle will be redrawn even if you only use setNeedsDisplayInRect:. My hacky way to combat this (thanks to the dev forums) is described in my self-answered question here. Otherwise, if your view takes a long time to draw the entire frame (mine took an unacceptable 150 ms), you will get a short stutter under certain conditions while the view buffer gets recreated.
EDIT: With the new info from your comments, it seems that the answer to this question will benefit you: Use a CoreGraphic Stroke as Alpha Mask in iPhone App
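To make step 2's dirty-rectangle idea concrete, here is a tiny sketch of the arithmetic (written in C# only to match the rest of this page; the helper name and the padding choice are illustrative):

using System;

static class DirtyRect
{
    // Returns (x, y, width, height) of the rectangle spanning p0..p1, inflated by pad
    // so the edges of the stroked line are included.
    public static (float X, float Y, float W, float H) FromSegment(
        float x0, float y0, float x1, float y1, float pad)
    {
        float minX = Math.Min(x0, x1) - pad;
        float minY = Math.Min(y0, y1) - pad;
        float maxX = Math.Max(x0, x1) + pad;
        float maxY = Math.Max(y0, y1) + pad;
        return (minX, minY, maxX - minX, maxY - minY);
    }
}

// Usage: var r = DirtyRect.FromSegment(lastX, lastY, newX, newY, strokeWidth / 2f + 1f);
// then invalidate only that rectangle instead of the whole view.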
Hi, here is the code for painting, erasing, undo, redo, and saving as a picture. You can check the sample code and implement this in your project.
Here

Making a Drawable layer in cocos2d

I want to make sort of a chalk board for part of my app, and I was wondering how to accomplish this?
I was thinking I could create a sprite and have its image set to something very small (maybe a small point), and then add a new instance of that sprite everywhere the user touches to simulate a draw event. Something like [self addChild:someSprite]; for each touch location.
But it seems like that would be extremely memory-inefficient. There has to be a better way than that. Maybe drawing actual lines? I'm probably overlooking some method.
Thanks for any help.
You need to use CCRenderTexture for chalk board paintings. Check this article & project for a drawing example.
Your variant isn't as memory-inefficient as you think. No matter how many sprites you create with the same texture, the texture will be placed in memory only once, and all the sprites will use a pointer to it. One thing that does prevent many unnecessary calls is using CCBatchNode: it draws all its children with a single draw call. Without it, draw is called on every child.

How do I go about implementing sprite masking?

Using DirectX I'm rendering textured polygons (orthographically) so they act as HUD sprites. Now I'm not sure how I would go about implementing sprite masking in this system.
So basically say I have a sprite, how can I make it render only in a given portion of the screen which I define? And if a part of it moves outside this portion of the screen you don't see it?
Scissor Test.
http://msdn.microsoft.com/en-us/library/ee422196(VS.85).aspx
You're looking for what is called a viewport. Considering you did not specify which DirectX version and which language you're using, I'll have to point to the DirectX9 spec.
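Since the rest of this page leans on XNA, here is a rough sketch of the scissor-test idea expressed in XNA/C# (the same concept as Direct3D 9's D3DRS_SCISSORTESTENABLE plus SetScissorRect); the rectangle, hudSprite and hudPosition are placeholders, and GraphicsDevice/spriteBatch are the usual Game members:

// Create once and reuse: a rasterizer state with scissor testing enabled.
var scissorState = new RasterizerState { ScissorTestEnable = true };

// The region of the screen the sprite is allowed to appear in.
GraphicsDevice.ScissorRectangle = new Rectangle(100, 100, 300, 200);

spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                  null, null, scissorState);
spriteBatch.Draw(hudSprite, hudPosition, Color.White);  // clipped to the scissor rect
spriteBatch.End();

Anything drawn outside the scissor rectangle is simply discarded by the GPU, which gives exactly the "only visible in this portion of the screen" behaviour the question asks about.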

Resources