I'm creating a drawing application for iOS, and I need to smooth the lines the user draws.
I'm using multisampling, as usual.
Each time the user moves their finger, the code does the following:
1. Create points to form a line, and draw these points into a multisampled buffer.
2. Resolve the multisampled buffer.
3. Draw the resolved buffer to the canvas.
The problem is that when the user has a big canvas (e.g. 2048x2048), the resolve step takes long enough that drawing becomes laggy and choppy. The resolve processes every pixel in the buffer, regardless of whether a given pixel actually needs to be resolved.
Drawing apps like Procreate draw smoothly with no lag, even on a big canvas.
So it is possible; I just don't know how it's done.
Does anybody have an idea for a solution?
Thanks.
Just in case someone has the same problem as me, I found a decent solution:
1. Create a smaller multisampled FBO used only for drawing the line from the last point to the current point. I use a 256x256 buffer.
2. When drawing from the last point to the current point, render into this small buffer and then resolve it.
3. Draw the resolved buffer into the current layer.
The result is not bad: no more lag. The only tricky part is setting up the appropriate transforms, matrices, etc., which is quite hard. A rough sketch of the idea is below.
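Roughly, the per-move path looks like the following. This is a minimal sketch assuming OpenGL ES 3.0; scratchMSFBO, scratchFBO, scratchTex, canvasFBO and the two draw helpers are hypothetical names standing in for the app's own state, not a real API.

```cpp
#include <OpenGLES/ES3/gl.h>  // iOS header; the path differs on other platforms
#include <algorithm>

extern GLuint scratchMSFBO;   // 256x256 multisampled FBO
extern GLuint scratchFBO;     // 256x256 single-sample FBO with scratchTex attached
extern GLuint scratchTex;
extern GLuint canvasFBO;      // FBO holding the full-size canvas
extern int canvasWidth, canvasHeight;

void drawSegment(float x0, float y0, float x1, float y1);              // hypothetical
void drawTexturedQuad(GLuint tex, float x, float y, float w, float h); // hypothetical

const float kPadding = 16.0f; // margin so wide brush stamps aren't clipped

void strokeMoved(float lastX, float lastY, float curX, float curY) {
    // 1. Render only the new segment into the small multisampled buffer,
    //    translated so its bounding box fits inside the 256x256 region.
    float originX = std::min(lastX, curX) - kPadding;
    float originY = std::min(lastY, curY) - kPadding;
    glBindFramebuffer(GL_FRAMEBUFFER, scratchMSFBO);
    glViewport(0, 0, 256, 256);
    drawSegment(lastX - originX, lastY - originY,
                curX - originX,  curY - originY);

    // 2. Resolve just those 256x256 pixels instead of the whole canvas.
    glBindFramebuffer(GL_READ_FRAMEBUFFER, scratchMSFBO);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, scratchFBO);
    glBlitFramebuffer(0, 0, 256, 256, 0, 0, 256, 256,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);

    // 3. Composite the resolved tile into the big canvas at the same offset.
    glBindFramebuffer(GL_FRAMEBUFFER, canvasFBO);
    glViewport(0, 0, canvasWidth, canvasHeight);
    drawTexturedQuad(scratchTex, originX, originY, 256.0f, 256.0f);
}
```

Note the sketch ignores segments longer than the scratch buffer; clamping or splitting them is part of the fiddly transform work mentioned above.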
I have a 2D game with 4-directional movement, and I'm having FPS (GPU) problems because I have to draw a lot of textures.
I've read a lot about techniques to optimize performance, but I don't know what else to do.
The main problem is that on some occasions I have about 200 creatures, and for each one I have to draw its body (a single sprite) plus spells and other animations on top of it. I think the conflict arises because the loop that draws the creatures must change textures for each creature, i.e. body > animation1 > animation2 > animation3, about 200 times per frame at 60 fps, which lowers the frame rate to about 40-50.
Any suggestions?
The issue is probably - as you have already suggested - the constant switching between different textures. This is much slower than drawing the same number of sprites with the same texture.
To change this, consider putting all your textures into a single big texture. You then always draw that texture. This obviously would look quite wrong, so you also have to tell XNA which part of the texture you want to draw. For that, you can use the SourceRectangle parameter that can be passed to SpriteBatch.Draw(...). That way, you can always render the same texture but can still have different images on screen.
See also this answer about texture atlases for more details.
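In XNA itself, the key piece is the sourceRectangle argument of SpriteBatch.Draw. As a language-neutral illustration, here is a small C++ sketch of how a source rectangle can be computed for a grid-packed atlas; the atlas size, cell size and index scheme are assumptions for the example, not anything XNA prescribes.

```cpp
#include <cstdio>

// A source rectangle into the atlas, in pixels.
struct Rect { int x, y, w, h; };

// Assumed atlas layout: equally sized cells, filled row by row.
constexpr int kAtlasWidth  = 1024;       // atlas texture width (assumption)
constexpr int kCellW = 64, kCellH = 64;  // size of one sprite cell
constexpr int kCellsPerRow = kAtlasWidth / kCellW;

// Map a sprite index to the region of the atlas holding its image.
Rect sourceRect(int spriteIndex) {
    int col = spriteIndex % kCellsPerRow;
    int row = spriteIndex / kCellsPerRow;
    return { col * kCellW, row * kCellH, kCellW, kCellH };
}

int main() {
    // A creature's body might live in cell 0, a spell overlay in cell 17, etc.
    Rect body  = sourceRect(0);
    Rect spell = sourceRect(17);
    std::printf("body:  %d,%d %dx%d\n", body.x, body.y, body.w, body.h);
    std::printf("spell: %d,%d %dx%d\n", spell.x, spell.y, spell.w, spell.h);
}
```

With all sprites in one atlas, the draw loop never switches textures; it only varies the source rectangle per draw call.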
I've written an app with an object detection model that processes images when an object is detected. The problem I'm running into is that an object may be detected with 99% confidence while the frame I'm processing is very blurry.
I've considered analyzing the frame to detect blurriness, or detecting device movement and skipping frames while the device is moving a lot.
Do you have any other suggestions for processing only sharp frames, or solutions other than the ones I've proposed? Thanks
You might have issues detecting "movement" when, for instance, driving in a car. In that case, looking at something inside the car would not count as movement, while looking at something outside would (unless it's far away). There are many other cases like this.
I would start by checking whether the camera is in focus. That is not the same as checking whether a frame is blurry, but it might be very close.
The other option I can think of is simply checking 2 or more sequential frames to see if they are roughly the same. For that, it is best to define a grid, for instance 16x16, over which you evaluate similarity. You would need to mipmap your frames, which done manually means halving the image repeatedly until you reach a 16x16 image (2000x1500 would become 1024x1024 -> 512x512 -> 256x256 ...). Then grab those 16x16 pixels and store them. Once you have enough frames (at least 2), you can start comparing the values. The GPU is perfect for the resizing, but the 16x16 values are probably best evaluated on the CPU. What you need to do is essentially compute the average pixel difference between two sequential 16x16 buffers, then use that value to decide whether detection should be enabled.
This procedure may still not be perfect, but it should be quite feasible from a performance perspective. There may be shortcuts, since some tools already do the resizing, so you may not need to halve the image manually. From a theoretical perspective, you are creating sectors and computing their average color. If all the sectors have almost the same color across 2 or more frames, there is a high chance the camera did not move much in that time, and the image should not be blurred by movement. Still, if the camera is out of focus, you can have multiple sequential frames that are exactly the same yet all blurry; the same applies if you only detect phone movement.
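Here is a minimal CPU-side sketch of that comparison in C++. It assumes square grayscale frames whose side is a power of two; a real app would crop or letterbox first and would do the downsizing on the GPU.

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// A square grayscale image with side 'size'.
struct Image {
    int size;
    std::vector<uint8_t> px;  // size * size luminance values, row-major
};

// Halve the image by averaging each 2x2 block (one manual mipmap step).
Image halve(const Image& in) {
    int half = in.size / 2;
    Image out{ half, std::vector<uint8_t>(size_t(half) * half) };
    for (int y = 0; y < half; ++y)
        for (int x = 0; x < half; ++x) {
            int sum = in.px[(2*y)   * in.size + 2*x] + in.px[(2*y)   * in.size + 2*x + 1]
                    + in.px[(2*y+1) * in.size + 2*x] + in.px[(2*y+1) * in.size + 2*x + 1];
            out.px[y * half + x] = uint8_t(sum / 4);
        }
    return out;
}

// Keep halving until the image is 16x16.
Image reduceTo16(Image img) {
    while (img.size > 16) img = halve(img);
    return img;
}

// Average per-pixel difference between two 16x16 buffers.
double averageDifference(const Image& a, const Image& b) {
    double total = 0.0;
    for (size_t i = 0; i < a.px.size(); ++i)
        total += std::abs(int(a.px[i]) - int(b.px[i]));
    return total / double(a.px.size());
}

// Usage: enable detection only when the scene has been still between frames.
// const double kThreshold = 4.0;  // tuning value, an assumption
// bool still = averageDifference(reduceTo16(prev), reduceTo16(cur)) < kThreshold;
```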
I've tried to make an "overlay" effect in a 3D scene. After drawing things to the buffer, I tried to draw a full-screen quad with blending enabled and the depth test disabled. On some Android devices this seems to have caused a slowdown.
I found this link:
"The particularly slow point is the point where the drawing of a pixel needs to check what the color behind it was."
So instead of drawing a single full-screen quad, I divided it into tiles and rendered them with multiple draw calls, which seems to have produced some gain.
What may be happening here, and how can this be profiled in WebGL, i.e. how does one arrive at the conclusion in the quote above?
I guess that to profile it, you simply have to test with several blending functions, with and without blending enabled, and so on.
Blending is not a trivial operation, and indeed we can assume that blending functions which need to read the pixel already in the buffer can cause a performance loss, like all "read" operations in OpenGL, because they can stall the pipeline. I would guess most modern desktop GPUs have hardware dedicated to optimizing this, but on mobile phones it is probably more of a problem.
Anyway, if you are about to draw a full-screen quad, why not render the quad directly from two source textures and blend them in the fragment shader using a custom equation? That way you don't need blending at all, and you avoid any back-buffer reading problem.
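For example, the single-pass composite could be a fragment shader along these lines (a sketch: u_scene, u_overlay and v_uv are illustrative names, and the equation shown is just the standard "source over" blend):

```cpp
// GLSL ES 1.0 fragment shader, kept here as a C++ string constant.
const char* kCompositeFrag = R"(
    precision mediump float;
    uniform sampler2D u_scene;    // what you previously rendered
    uniform sampler2D u_overlay;  // the overlay layer
    varying vec2 v_uv;
    void main() {
        vec4 scene   = texture2D(u_scene, v_uv);
        vec4 overlay = texture2D(u_overlay, v_uv);
        // Any custom equation works here; this is classic "source over":
        gl_FragColor = overlay * overlay.a + scene * (1.0 - overlay.a);
    }
)";
```

Since both inputs are plain textures, the shader never has to read back the buffer it is writing to, which is exactly the operation the quote above blames for the slowdown.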
Hi there. The effect I want to implement is a burning glow on the user's signature. I've done the signature drawing with Quartz2D. Can anyone point me in a direction for drawing the burning glow effect? Thanks!
The glow is caused by light streaming from a source through the strokes and illuminating particles in the air as it travels.
So a brute-force solution that works when viewed directly from the front is to draw the plane several times with additive transparency. You'll want to move and scale the plane for each draw so that you're tracing out the shape of a frustum.
You'll need to do so many draws that I can't imagine you'd get both real-time performance and an acceptable result. You should be fine if you can spend a half-second or a second preparing the image, though.
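Here is a sketch of that brute-force loop, assuming OpenGL-style additive blending; drawTexturedQuadScaled is a hypothetical helper that draws the signature texture centered, scaled by scale, with its color multiplied by alpha.

```cpp
#include <OpenGLES/ES3/gl.h>

void drawTexturedQuadScaled(GLuint tex, float scale, float alpha); // hypothetical

void drawGlow(GLuint signatureTex) {
    const int   kSlices   = 64;    // more slices = smoother shafts, more draws
    const float kMaxScale = 1.5f;  // how far the light "streams" outward

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE);  // additive: every slice adds light

    for (int i = 0; i < kSlices; ++i) {
        float t     = float(i) / float(kSlices - 1);
        float scale = 1.0f + t * (kMaxScale - 1.0f);  // trace out the frustum
        float alpha = (1.0f - t) / float(kSlices);    // fade with distance
        drawTexturedQuadScaled(signatureTex, scale, alpha);
    }
    glDisable(GL_BLEND);
}
```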
The most obvious alternative would be to work backwards, writing a shader that traces back through the frustum, sampling the 2D texture appropriately. That's likely to cost a similar amount, because texture sampling will be the bottleneck due to memory bandwidth (make sure you upload a single-channel texture in any event), but it could be made to work from any angle.
I'm just trying to better understand the DirectX pipeline. I'm curious whether depth buffers are mandatory to get things working, or whether they're just a buffer you need if you want objects to appear behind one another.
The depth buffer is not mandatory. In a 2D game, for example, there is usually no need for it.
You need a depth buffer if you want objects to appear behind each other, but still want to be able to draw them in arbitrary order.
If you draw all triangles from back to front, and none of them intersect, then you could do without the depth buffer. However, it's generally easier to skip the depth sorting and just use the depth buffer anyway.
Depth buffers are not mandatory. They simply solve the following problem: suppose you have an object near the camera which is drawn first. Then, after it is already drawn, you want to draw an object which is far away but at the same on-screen position as the nearby object. Without a depth buffer, the far object gets drawn on top, which looks wrong. With a depth buffer, it is obscured, because the GPU figures out it's behind something else that has already been drawn.
You can turn them off and deal with that, e.g. by drawing back-to-front (which has other problems that depth buffering solves), which is easy in 2D games. Alternatively, you might actually want that overdraw as some kind of effect. But a depth buffer is by no means necessary for basic rendering.
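For completeness, the back-to-front alternative mentioned above is just a CPU-side sort before drawing. A minimal sketch, where Sprite and drawSprite stand in for whatever your renderer actually uses:

```cpp
#include <algorithm>
#include <vector>

struct Sprite {
    float depth;  // larger = farther from the camera (convention assumed)
    int   id;     // stand-in for whatever identifies the thing to draw
};

void drawSprite(const Sprite& s);  // hypothetical draw call

void renderWithoutDepthBuffer(std::vector<Sprite>& sprites) {
    // Painter's algorithm: draw farthest first so nearer sprites paint over it.
    std::sort(sprites.begin(), sprites.end(),
              [](const Sprite& a, const Sprite& b) { return a.depth > b.depth; });
    for (const Sprite& s : sprites)
        drawSprite(s);
}
```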