How to draw this effect with any technology on iOS

Hi there, the effect I want to implement is a burning glow on the user's signature. I've done the signature drawing with Quartz 2D. Can anyone point me in the right direction for drawing the burning glow effect? Thanks!

The glow is caused by light streaming from a source through the strokes and illuminating particles in the air as it travels.
So a brute-force solution that works when viewed directly from the front is to draw the plane several times with additive transparency. You'll want to move and scale the plane for each draw so that you're tracing out the shape of a frustum.
You'll need so many draws that I can't imagine you'd get both real-time performance and an acceptable result. You should be fine if you can spend half a second or so preparing the image once, though.
The most obvious alternative is to work backwards: write a shader that traces back through the frustum, sampling the 2D texture appropriately. That's likely to cost a similar amount, because texture sampling will be the bottleneck due to memory bandwidth (make sure you upload a one-channel texture in any event), but it could be made to work from any angle.
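For the head-on brute-force version, the whole thing can even be prototyped with Core Graphics. A rough sketch, assuming the signature has already been rendered bright-on-black into a UIImage (the function name, pass count, and constants are just placeholders; a GPU version would do the same draws with an additive blend state):

```swift
import UIKit

// Draws the signature repeatedly, each pass slightly larger and dimmer,
// composited additively. Viewed from the front this approximates the
// light-shaft frustum described above.
func glowImage(from signature: UIImage, passes: Int = 32) -> UIImage {
    let size = signature.size
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { ctx in
        // Start from black so the additive passes have room to accumulate.
        ctx.cgContext.setFillColor(UIColor.black.cgColor)
        ctx.cgContext.fill(CGRect(origin: .zero, size: size))
        for i in 0..<passes {
            let t = CGFloat(i) / CGFloat(passes)
            let scale = 1 + 0.4 * t                 // each slice slightly larger
            let w = size.width * scale, h = size.height * scale
            let rect = CGRect(x: (size.width - w) / 2,
                              y: (size.height - h) / 2,
                              width: w, height: h)
            // .plusLighter is additive; later (larger) slices are dimmer,
            // so the shaft fades with distance from the strokes.
            signature.draw(in: rect, blendMode: .plusLighter, alpha: 0.08 * (1 - t))
        }
    }
}
```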

Related

Blurry images during object detection from iOS app

I've written an app with an object detection model that processes frames when an object is detected. The problem I'm running into is that an object is sometimes detected with 99% confidence even though the frame being processed is very blurry.
I've considered analyzing the frame and attempting to detect blurriness, or detecting device movement and skipping frames while the device is moving a lot.
Do you have any other suggestions for processing only non-blurry frames, or solutions other than the ones I've proposed? Thanks
You might have issues detecting "movement" when, for instance, driving in a car. In that case, looking at something inside the car doesn't count as movement, while looking at something outside does (unless it's far away). There can be many other cases like this.
I would start by checking whether the camera is in focus. It is not the same as checking whether a frame is blurry, but it may be very close.
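On iOS, AVFoundation exposes the focus state directly, so a rough sketch of that check could look like this (assuming `device` is the AVCaptureDevice feeding your pipeline; the callback wiring is just illustrative):

```swift
import AVFoundation

var focusObservation: NSKeyValueObservation?

// isAdjustingFocus is true while the camera is still hunting for focus,
// and it is key-value observable, so detection can be paused meanwhile.
func watchFocus(of device: AVCaptureDevice,
                detectionEnabled: @escaping (Bool) -> Void) {
    focusObservation = device.observe(\.isAdjustingFocus,
                                      options: [.new, .initial]) { _, change in
        detectionEnabled(!(change.newValue ?? false))
    }
}
```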
The other option I can think of is simply taking two or more sequential frames and checking whether they are roughly the same. For that it is best to define a grid, for instance 16x16, on which you evaluate average values. You would need to mipmap your photos, which done manually means halving the size until you reach a 16x16 image (a 2000x1500 photo would first be squashed to 1024x1024, then 512x512 -> 256x256 -> ... -> 16x16). Then grab those 16x16 pixels and store them. Once you have enough frames (at least two) you can start comparing the values: find the average pixel difference between two sequential 16x16 buffers and use it to decide whether detection should be enabled. The GPU is perfect for the resizing, but the 16x16 values themselves are probably best evaluated on the CPU.
This procedure may still not be perfect, but it should be quite feasible performance-wise. There may be shortcuts, too: some tools already do the resizing, so you may not need to halve the image manually. From a theoretical perspective you are creating sectors and computing their average color; if all the sectors have nearly the same color across two or more frames, there is a high chance the camera did not move much in that time, so the image should not be blurred by movement. Still, if the camera is out of focus you can have multiple sequential frames that are exactly the same and yet all blurry; the same happens if you only check for phone movement.
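A minimal sketch of the comparison step, assuming each frame has already been reduced to a 16x16 grayscale buffer of 256 bytes as described above (the threshold is a tunable, not a magic number):

```swift
// Average per-pixel difference between two 16x16 grayscale buffers (0...255).
func averageDifference(_ a: [UInt8], _ b: [UInt8]) -> Double {
    precondition(a.count == 256 && b.count == 256, "expected 16x16 buffers")
    var total = 0
    for i in 0..<256 {
        total += abs(Int(a[i]) - Int(b[i]))
    }
    return Double(total) / 256.0
}

// Enable detection only when two consecutive frames roughly agree.
func cameraLooksSteady(previous: [UInt8], current: [UInt8]) -> Bool {
    return averageDifference(previous, current) < 8   // tune for your camera
}
```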

WebGL full-screen blending slowdown

I tried to make an "overlay" effect in a 3D scene. After drawing stuff to the buffer, I drew a full-screen quad with blending enabled and the depth test disabled. On some Android devices this seems to have caused a slowdown.
I found this link:
"The particularly slow point is the point where the drawing of a pixel needs to check what the color behind it was."
So instead of drawing a single full-screen quad, I divided it up into tiles and rendered them with multiple draw calls, which seems to have brought some gains.
What may be happening here, and how can this be profiled with WebGL, i.e. how does one arrive at the conclusion quoted above?
I guess that to profile it, you simply have to test with several blending functions, with and without blending enabled, etc.
Blending is not a trivial operation, and we can assume that blend functions which need to read the pixel already in the buffer can cause a performance loss, like all "read" operations in OpenGL, because they can stall the pipeline. Most modern desktop GPUs presumably have dedicated hardware to optimize this, but on mobile phones it may be more problematic.
Anyway, if you are going to draw a full-screen quad, why not render the quad directly from two source textures, blending them in the fragment shader with a custom equation? That way you don't need blending at all, and you avoid any back-buffer reading problem.
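To make that concrete, here is a sketch of such a fragment shader. It's written in Metal Shading Language inside a Swift string for consistency with the other snippets here; the WebGL/GLSL version is a direct translation, and the entry-point name and the `mix`-based equation are placeholders for whatever blend you actually need:

```swift
import Metal

// Both inputs are ordinary textures, so no back-buffer read is needed
// and fixed-function blending stays disabled.
let compositeShader = """
#include <metal_stdlib>
using namespace metal;

struct VertexOut {
    float4 position [[position]];
    float2 uv;
};

fragment float4 composite(VertexOut in [[stage_in]],
                          texture2d<float> scene   [[texture(0)]],
                          texture2d<float> overlay [[texture(1)]])
{
    constexpr sampler s(filter::linear);
    float4 base = scene.sample(s, in.uv);
    float4 top  = overlay.sample(s, in.uv);
    // Any custom blend equation can go here.
    return float4(mix(base.rgb, top.rgb, top.a), 1.0);
}
"""
```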

How to do a laser effect with HLSL and DirectX 11?

I am still developing my sci-fi video game on my own custom game engine. Now I want to implement the combat system in the game and in the engine. While nearly everything is clear to me, I wonder how to do proper laser beams like the ones known from Star Wars, Star Trek, Babylon 5, etc.
I did some online research; however, I did not find any suitable article. I am pretty sure I searched with the wrong keywords/tags. Can you give me some hints on how to implement effects like laser beams? I think it would be enough to know the proper techniques or terms to search for.
A common way is to draw three (or more) transparent planes that intersect along the beam's axis, arranged like a fan around it.
Each of them bears the same laser texture, which fades to black near the top and bottom edges.
If you add any subtle detail, remember to scale the texture coordinates appropriately based on the length of the beam and enable wrapping.
Finally, and most importantly, use a shader that shows only the planes facing the camera, while fading away the ones at a glancing angle to hide the fact that we're using intersecting planes and make the beam look smooth and plausible. The blending should be additive. You should also add some extra effects to the ends of the beam, again to hide the planes.
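A minimal sketch of that fade factor, assuming both vectors are unit length (the squaring is just one possible falloff curve):

```swift
import simd

// Alpha for one beam plane: 1 when the plane faces the camera head-on,
// falling to 0 as it turns edge-on (which would reveal the trick).
func beamPlaneAlpha(planeNormal: SIMD3<Float>, toCamera: SIMD3<Float>) -> Float {
    let facing = abs(simd_dot(planeNormal, toCamera))
    return facing * facing   // squaring sharpens the falloff
}
```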

Multisampling for drawing app

I'm creating a drawing application for iOS and need to smooth the lines the user draws.
I'm using multisampling, as is usual for this.
Each time the user moves their finger, the code does the following:
1. Create points to form a line, and draw those points into a multisampled buffer.
2. Resolve the multisampled buffer.
3. Draw the resolved buffer to the canvas.
The problem is that with a big canvas (e.g. 2048x2048) the resolve step takes long enough that drawing becomes laggy and choppy. The resolve processes every pixel in the buffer, regardless of whether a given pixel actually needs to be resolved.
I've seen a drawing app like Procreate draw smoothly with no lag, even on a big canvas.
So, it is possible, I just don't know how to do that.
Does anybody have an idea for solution?
Thanks.
Just in case someone has the same problem as me, I found a decent solution:
1. Create a smaller multisampled FBO just for drawing the line from the last point to the current point; I use a 256x256 buffer.
2. When drawing from the last point to the current point, render into this buffer and then resolve it.
3. Draw this buffer to the current layer.
The result is not bad: no more lag. The only tricky part is that setting up the appropriate transforms and matrices is quite fiddly.
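For anyone on Metal, the same setup looks roughly like this (a sketch assuming 4x MSAA and the 256x256 tile described above; all names are placeholders):

```swift
import Metal

// A small scratch pair: a multisampled tile for rendering the new stroke
// segment, and a single-sample tile it resolves into. Only these 256x256
// textures are resolved, not the whole canvas.
func makeStrokeTileTargets(device: MTLDevice) -> (msaa: MTLTexture, resolved: MTLTexture) {
    let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                        width: 256, height: 256,
                                                        mipmapped: false)
    desc.usage = [.renderTarget, .shaderRead]

    desc.textureType = .type2DMultisample
    desc.sampleCount = 4
    let msaa = device.makeTexture(descriptor: desc)!

    desc.textureType = .type2D
    desc.sampleCount = 1
    let resolved = device.makeTexture(descriptor: desc)!
    return (msaa, resolved)
}

// The resolve happens as part of the render pass's store action.
func strokePass(msaa: MTLTexture, resolved: MTLTexture) -> MTLRenderPassDescriptor {
    let pass = MTLRenderPassDescriptor()
    pass.colorAttachments[0].texture = msaa
    pass.colorAttachments[0].resolveTexture = resolved
    pass.colorAttachments[0].loadAction = .clear
    pass.colorAttachments[0].storeAction = .multisampleResolve
    return pass
}
```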

XNA Adding Craters (via GPU) with a "Burn" Effect

I am currently working on a 2D "Worms" clone in XNA, and one of the features is "deformable" terrain (e.g. when a rocket hits the terrain, there is an explosion and a chunk of the terrain disappears).
How I am currently doing this: I use a texture that has a progressively higher red value as it approaches the center. I cycle through every pixel of that "deform" texture, and if the current pixel overlaps a terrain pixel and has a high enough red value, I set the corresponding entry in the color array representing the terrain to transparent. If the current pixel does NOT have a high enough red value, I blacken the terrain color (the closer the red value is to the threshold, the blacker it gets). At the end of this operation I use SetData to update my terrain texture.
I realize this is not a good way to do it, not only because I have read about pipeline stalls and such, but also because it can become quite laggy if lots of craters are added at the same time. I want to redo my crater generation on the GPU instead, using render targets "ping-ponging" between being the target and being the texture to modify. That isn't the problem; I know how to do that. The problem is that I don't know how to keep my burn effect with this method.
Does anybody have an idea how I would recreate this burn effect (darkening the terrain around the edge of the formed crater) on the GPU? I am completely unfamiliar with shaders, but if it requires one I would be really thankful if someone walked me through how to do it. Any other approaches would be great too.
Sounds like you're on the right track, but you're doing a lot of things by hand that can also be done by just drawing sprites and applying the right formulas.
For example:
Suppose your terrain is saved in a giant texture, in the texture's alpha channel: 1 is terrain, 0 is nothing.
An explosion happens and the terrain has to be deformed. Update your texture easily by just drawing a black, fully transparent sphere (or explosion area) onto it. The terrain is gone where the sphere's alpha value is 0. Your texture is now up to date; everything was done by the SpriteBatch, and nothing had to be checked per pixel.
I don't know if you wanted a solution for this part as well, but now you have one.
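The same trick in Core Graphics terms, in case a concrete example helps (in XNA it would be a SpriteBatch draw with a blend state that overwrites alpha; `removeTerrain` and its parameters are made up for this sketch):

```swift
import UIKit

// Punches a circular hole in the terrain by erasing its alpha channel,
// the Core Graphics analogue of drawing a zero-alpha sphere.
func removeTerrain(from terrain: UIImage, at center: CGPoint, radius: CGFloat) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: terrain.size)
    return renderer.image { ctx in
        terrain.draw(at: .zero)
        let cg = ctx.cgContext
        // .destinationOut keeps the destination only where the new shape
        // is NOT drawn, so alpha inside the circle drops to 0.
        cg.setBlendMode(.destinationOut)
        cg.setFillColor(UIColor.black.cgColor)
        cg.fillEllipse(in: CGRect(x: center.x - radius, y: center.y - radius,
                                  width: radius * 2, height: radius * 2))
    }
}
```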
For the burn effect
Now that we have our terrain in a texture, we can apply a post effect while drawing, using a shader (just like you said). The shader reads the texture's alpha channel and can do different things with it to get our burn effect.
The first option is edge detection. Check a few pixels in all four directions and see whether the current pixel is near an edge. If so, burn it by, for example, multiplying its color by the distance to the edge (or any other function you like).
Another way is quite similar, but works in two steps. First you do the same kind of edge detection, but you save the detected edges into a separate texture. Then, when you draw your terrain, you overlay the edge texture on top, so the result looks the same as drawing the ground in one pass.
The main difference with the second option is that you can also choose to just draw your normal, unmodified ground, since you are not adjusting the pixels of the ground texture at render time.
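A minimal CPU sketch of that edge test; the shader version applies the same logic per fragment, and `reach` and the linear falloff are arbitrary choices:

```swift
// `alpha` is the terrain's alpha channel: 1 is terrain, 0 is air.
// Returns a darkening factor: 0 = fully burnt at the edge,
// 1 = untouched interior. Multiply the terrain color by this.
func burnFactor(alpha: [[Float]], x: Int, y: Int, reach: Int = 4) -> Float {
    guard alpha[y][x] > 0 else { return 0 }       // air: value unused anyway
    var nearest = reach + 1
    for d in 1...reach {                          // look outward in 4 directions
        let neighbours = [(x + d, y), (x - d, y), (x, y + d), (x, y - d)]
        for (nx, ny) in neighbours
        where ny >= 0 && ny < alpha.count && nx >= 0 && nx < alpha[0].count {
            if alpha[ny][nx] == 0 { nearest = min(nearest, d) }
        }
    }
    return Float(nearest - 1) / Float(reach)      // linear falloff with distance
}
```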
I know this is a long story, but it is a nice technique. Have a look at toon shaders; they do edge detection as well, even though they are 3D.
Keywords: Toon shading, HLSL, Post effects, edge detection, image processing.
Recommended reading: http://rbwhitaker.wikidot.com/xna-tutorials
