Ink splash drawing using OpenGL shaders - iOS

I would like to create ink splash lines using OpenGL.
Basically I want to "render" to a texture, so that I can use these textures later.
The variation I need in the line is given by an array of values. For example,
assuming a horizontal line is required, I would have an array like this:
8 7 4 2 2 2 3 4 5 6 8 9
And the result would be a line similar to a brush stroke, where higher values
represent "more ink" at that area, so it looks fatter, and smaller values look
slimmer.
I need to make this iPhone compatible, so I have to use OpenGL ES 2.0, but I'm a bit
lost as to what technique I should be using. I thought that perhaps I could use a sprite
and draw it multiple times at random positions (like a rain of dots), varying its size
according to the value?
I've read several papers that do show the results I want to get, but they don't say much
about the rendering part, or how they produce this type of drawing.
This is an example of the effect I want to achieve; the only difference is that I would
like the "splashes" to be continuous (no white between them).
Unlike the image shown, direction is not needed, since the result will be rendered into a
texture for later use. It should always be horizontal within a given area, for example 256x64.
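For reference, here is a minimal sketch of the render-to-texture side in OpenGL ES 2.0 (C++), assuming the "rain of dots" idea: a 256x64 texture is attached to a framebuffer object, and one GL point is stamped per array value, with gl_PointSize driven by the value so that larger values deposit more ink. All names and constants here are illustrative, not taken from any particular source.

// Sketch only: create a 256x64 texture plus an FBO, so the splash can be
// drawn offscreen and reused later as an ordinary texture.
GLuint inkTexture = 0, inkFBO = 0;
glGenTextures(1, &inkTexture);
glBindTexture(GL_TEXTURE_2D, inkTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 64, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenFramebuffers(1, &inkFBO);
glBindFramebuffer(GL_FRAMEBUFFER, inkFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, inkTexture, 0);
// check glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE here

// Illustrative shaders: one GL point per array value, sized by that value.
const char *vsh =
    "attribute vec2 a_position;   // x along the line, y jittered slightly \n"
    "attribute float a_ink;       // the 8 7 4 2 ... value for this stamp  \n"
    "void main() {                                                         \n"
    "    gl_Position = vec4(a_position, 0.0, 1.0);                         \n"
    "    gl_PointSize = a_ink * 2.0;   // fatter stamp for more ink        \n"
    "}                                                                     \n";

const char *fsh =
    "precision mediump float;                                              \n"
    "void main() {                                                         \n"
    "    float d = length(gl_PointCoord - vec2(0.5));                      \n"
    "    float a = 1.0 - smoothstep(0.2, 0.5, d);   // soft round splash   \n"
    "    gl_FragColor = vec4(0.0, 0.0, 0.0, a);     // black ink           \n"
    "}                                                                     \n";

// Render the stroke into the texture: clear to transparent, enable blending so
// overlapping stamps merge into a continuous line rather than leaving gaps.
glViewport(0, 0, 256, 64);
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// ... compile/link the program, upload one point per value (plus a few jittered
// duplicates per value if a rougher, splashier look is wanted), then:
// glDrawArrays(GL_POINTS, 0, pointCount);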

Related

In layer-based drawing, how can we reduce the lag while drawing or zooming a UIView?

I'm working on a drawing app where every Bezier path is a CAShapeLayer, and I'm adding these sub-layers to a super layer, the UIView's backing CALayer. Once the points/lines exceed a certain threshold, e.g. 1000 CAShapeLayers, drawing, zooming, and scrolling lag. Is there a way to optimize this?
A couple of options for trying to use thousands of layers...
First, I ran a test on a 3rd-gen iPad Pro, generating 8100 shape layers. While there was a little bit of "lag" when zoomed out to see the full view, it certainly didn't make it unusable... and I noticed little to no lag when zoomed in.
Second, instead of using shape layers, you could define your own "layer" struct - tracking path, fill, border, etc. Then override draw() and only draw the paths whose bounding boxes intersect the draw rect.
Third, instead of using a thousands-of-layers view in your scroll view, use an image view. Each time you add a new layer, draw that layer into the image view's image. As you zoom it will become fuzzy... so each time the user ends zooming, update the image at the new scale. You'll notice a slight lag as the fuzzy image becomes clear, but that will only happen at the end of the zoom. You could even alleviate that by using "stepped" zooming - such as 100%, 200%, 400%, 800%.
Edit
I put together an example app that:
generates 95 paths, using the glyphs for the characters "!" through "~" from the Times New Roman font
paths have min 4 points, max 115 points; min 0 curves, max 55 curves
we add 33,805 CAShapeLayer (not text) layers, using 6 fill/stroke color combinations, to a 3508 x 2480 view in a scroll view
On an old iPhone 7 running iOS 13.3 ... sure, it has a "little" lag, but not what I would call unusable.
Looks like this at 1.0 Zoom Scale:
You may want to take a look at it and see if it has the same "lag" you're experiencing - https://github.com/DonMag/ShapeLayersWork
Edit 2 - 8137 layers using your hand-drawn "a" path:
Edit 3
"Chalkduster" font
generate a "grid" to fill the 3508 x 2480 view
cycle through paths
put all paths of the same color on the same layer (so 6 layers)
Here's the output:
It took over 20 seconds for the view to become visible and, as we would expect, it's completely unusable.
The "Points: / Curves:" lines list the number of points and curves per layer -- 4 million points and almost 2 million curves. I really think you're going to need to re-think your whole approach.
As a side note... are you familiar with the Sketch app for Mac? I put some text on some layers, using Chalkduster... converted the layers to outlines (paths instead of text)... and even with a small number of layers, Sketch's performance gets bad.

Fading a 3D object into the background, using D3D9, SM3 & HLSL

I have a simple program that renders a couple of 3D objects, using Direct3D 9 and HLSL. I'm just starting off with HLSL; I have no experience with 3D rendering.
I am able to change the texture & color of the models and fade between two textures without problems. However, I was wondering what the best way would be to simply fade a 3D object (blend it with the background). I assume it wouldn't be done by fading between two textures (using lerp), since I want the object faded into the entire background, and there would be many different textures behind it.
I'm using LPD3DXEFFECT as my effect class and DrawIndexedPrimitive as the drawing function in each pass, and I only have a single pass. I'm also using Shader Model 3, as this is an older project.
The only way I thought possible would be to simply get the color of the pixel before applying any changes, and then do calculations on it with the color of the model's texture to obtain a faded pixel. However, after looking around on the internet, it does not appear to be possible to get the color of a pixel before doing anything to it in HLSL.
Is it even possible to do something like this using HLSL? Am I missing something that could assist me here?
Any help is appreciated!
Forgive me if I'm misunderstanding, but it sounds like you're trying to simulate transparency instead of using built-in transparency.
If you're trying to get the color of the pixels behind the object and want to avoid using transparency, I'd start by trying to use the last rendered frame as a texture, then reference that texture in your current shader. There may be some way to do it within the same frame - to force all other rendering to go first, then handle the one object - but I don't know it.
After a long grind, I finally found a very good workaround for my problem, and I will try to explain my understanding of it for anyone else who has a similar issue. Thanks to Alexander Stewart for suggesting that there may be a built-in way to do it.
Method Description
Instead of taking care of the background fade in the HLSL pixel shader, there is another way to do it, using a method called Frame Buffer Alpha Blending (full MS Docs documentation: https://learn.microsoft.com/en-us/windows/win32/direct3d9/frame-buffer-alpha).
The basic idea behind this method is to provide a simple way of blending a pixel that is about to be rendered with the existing pixel on the screen. The formula is: FinalColor = ObjectPixelColor * SourceBlendFactor + BackgroundPixelColor * DestinationBlendFactor, where all of these "variables" are groups of 4 float values in the format (R, G, B, A).
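For example, with SourceBlendFactor = (a, a, a, a) and DestinationBlendFactor = (1-a, 1-a, 1-a, 1-a), choosing a = 0.3 gives a final pixel that is 30% object colour and 70% background colour.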
How I Implemented it
Before doing anything with the actual shaders, in my Visual Studio C++ file I have to pass a few flags to my render device (I used LPDIRECT3DDEVICE9 as my device class). I had to set render states for both D3DRS_SRCBLEND and D3DRS_DESTBLEND, which correspond to SourceBlendFactor and DestinationBlendFactor respectively in the formula above. These are the factors that multiply the object and background pixel colors. There are many possible values that can be assigned to D3DRS_SRCBLEND and D3DRS_DESTBLEND (the full list is available in the MS Docs link above), but in order to achieve what I wanted (simply a way to fade an object into the background with an alpha number going from 0 to 1), I figured the flags should be: SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA); SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);.
After setting these flags, before passing through my shaders & rendering, I just needed to set one more flag: SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);. I was also able to alternate between TRUE and FALSE here, without changing anything else, with no rendering problems (although my project was very simple; it would probably cause issues in larger projects). You can then pass any arguments you want, such as the alpha number, to the HLSL shader as a global variable (I did it using SetValue()).
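Put together, the device-side setup ends up looking roughly like this (device being the LPDIRECT3DDEVICE9, effect the LPD3DXEFFECT, and "fadeAmount" an illustrative placeholder for whatever the global in the .fx file is actually called):

// Enable frame-buffer alpha blending before drawing the object that should fade.
device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
device->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_SRCALPHA);     // SourceBlendFactor
device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);  // DestinationBlendFactor

// Pass the fade value (0 = fully faded into the background, 1 = fully opaque).
float fadeAmount = 0.5f;
effect->SetValue(effect->GetParameterByName(NULL, "fadeAmount"),
                 &fadeAmount, sizeof(fadeAmount));

// ... BeginPass / DrawIndexedPrimitive / EndPass as before ...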
Going back to my HLSL shader: after these changes, returning a color float4 taken from the tex2D() function in my pixel shader, with an alpha value between 0 and 1, yielded the correct fade, provided there aren't other issues. (Another issue I had, but hadn't realized at the time, was that my transparent object was actually being rendered before the background, so I can only recommend checking the rendering order when working on rendering projects.)
I'm sure there could have probably been a better way of implementing this with the latest DirectX, but my compiler only supports Shader Model 3 and lower.

SpriteKit SKTileMapNode vertical line glitch

I am making a 2D platformer and I decided to use multiple SKTileMapNodes as my backgrounds. Even with one tile map, I get these vertical or horizontal lines that appear and disappear as I move the player around the screen. See the image below:
My tiles are 256x256 and I'm storing them in a tile set .sks file. I'm not exactly sure why I'm getting this or how to get rid of it, and it is quite annoying. I'm wondering if others experience this as well.
I'm considering not using the tile maps, but I would prefer to use them if I can.
Thanks for any help with this!!!
I had the same issue and was able to solve it by "extruding" the tiled image a couple of pixels. This provides a little cushion of pixels to use when the floating-point issue occurs, instead of displaying nothing (hence the gap). This video sums it up pretty well:
Unity: extruding tile map images
If you're using TexturePacker to generate your sprite atlases, there is an option to add this automatically without having to do it to your tile images yourself.
Hope that helps!
Sort of like the "extruding" suggested by @cheaze, I simply make the tile size in the drawing code a tiny amount larger than the required tile size. This means the assets themselves do not have to be changed.
E.g. if your assets are sized 256 x 256 and all of your calculations are based on that, draw the textures as 256.02 x 256.02 pixels in size:
[SKSpriteNode spriteNodeWithTexture:texture size:CGSizeMake(256.02, 256.02)];
Only adding .02 pixel per side will overlap your tiles automatically and remove the line glitches, depending on your camera speed and frame rate.
If the problem is really bad, you can even go so far as to add half a pixel (+0.5) or an entire pixel to remove the glitches; the user will not be able to see the difference, since a one-pixel difference on a retina screen is hard to distinguish.

Why does gaps between tiles in an orthogonal tilemap cocos2d game appear when running on iPhone?

I'm trying to make a tilemap-based game using cocos2d 2.1 and Tiled 0.9.1. The game runs perfectly on the simulator, but I have gaps (artifact lines) between the tiles when running on the device.
Please see the screenshot.
The diff is the difference (made in Photoshop) between the original tile (taken straight from the PNG of the tileset) and the tile as rendered by cocos2d. As you can see, in the simulator they are 100% identical. However, on the device it seems that cocos2d shrinks the tile texture vertically by just a little bit. The 1-pixel stripe is actually the texture above the troublesome tile in the tileset.
Any idea what caused this and how to fix it?
While using this answer, in my case enabling CC_FIX_ARTIFACTS_BY_STRECHING_TEXEL was not enough.
I also added the following code to the AppDelegate::applicationDidFinishLaunching() function, and rounded the values passed to the setPosition(x, y) function to the nearest int.
Director::getInstance()->setProjection(Director::Projection::_2D);
I use cocos2d-x 3.4.
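The rounding part might look something like this (snapToPixel is just an illustrative helper name):

#include <cmath>

// Round a position to whole pixels before handing it to setPosition(),
// so nodes never sit on fractional coordinates.
static cocos2d::Vec2 snapToPixel(const cocos2d::Vec2 &p)
{
    return cocos2d::Vec2(std::round(p.x), std::round(p.y));
}

// usage:
sprite->setPosition(snapToPixel(cocos2d::Vec2(x, y)));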
I'm not certain why this happens on devices only, but you should look in ccConfig.h at the parameter CC_FIX_ARTIFACTS_BY_STRECHING_TEXEL. It is in itself a bad kludge, but it gives you a hint as to where to look.
Basically, you should make certain that all your positions are on an exact pixel boundary: on non-retina devices cast them to int, and on retina devices round to the nearest exact multiple of .5. The best way to ensure that is to make all your texture widths and heights even numbers... the onus is on the artist for anything that will not move. If you move things, and the final position is calculated (for example in a ccTouches move/end), make certain you do this rounding there. Beware of batch nodes: the node itself, and all its children, should be on a pixel boundary.
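As a concrete version of that rounding rule (a sketch only; snapToPixelGrid is an illustrative name, and contentScale is 1 on non-retina devices and 2 on retina devices):

#include <math.h>

// Snap a position (in points) to the nearest device-pixel boundary:
// scale 1 -> whole points, scale 2 -> exact multiples of 0.5.
static float snapToPixelGrid(float value, float contentScale)
{
    return roundf(value * contentScale) / contentScale;
}

// e.g. snapToPixelGrid(10.37f, 2.0f) == 10.5f, snapToPixelGrid(10.37f, 1.0f) == 10.0f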

iOS: Smooth button Glow effect by blending between images

I am creating a custom button that needs to be able to glow to a varying degree
How would I use these pictures to make a button that 'glows' the diamond when it is pressed, and have this glow gradually fade back to inert state?
I want to churn out several different colours of diamond as well... I am hoping to generate all different coloured diamonds from the same stock images presented here.
I would like to get my head around the basic methods available, in enough detail that I can see each one through and make a decision which path to take...
My tangled efforts so far... (I will delete all of this, or move it into possibly several answers, as a solution unfolds...)
I can see 3 potential solution paths:
GL
it looks as though GL has everything it takes to get complete fine-grained control over the process, although functions exposed by core graphics come tantalisingly close, and that would save several hundred lines of code spread over a bunch of source files, which seems a bit ridiculous for such a basic task.
core graphics, and core animation to accomplish the blending
The documentation goes on to say:
Anything underneath the unpainted samples, such as the current fill color or other drawing, shows through.
so I can chroma-key mask the left image, setting {0,0,0}, i.e. black, as the key.
this at least secures a transparent background; now I have to work on making it yellow instead of grey.
so maybe I could have started instead by setting a yellow background colour for my image context, then used some CGContextSetBlendMode(...) to imprint the diamond on the yellow, THEN used chroma-key masking to get a transparent background
ok, this covers at least getting the basic unlit image on-screen
now I could overlay the sparkly image, using some blend mode, maybe I could keep it in its current greyscale state, and that would just boost the colours of the original
only problem with this is that it is a lot of heavy real-time blending
so maybe I could pre-calculate every image in the animation... this is looking increasingly mucky...
Cocos2D
if this allows me to set the blend mode to additive blending then I could just composite the glowing image over the original image with an appropriate Alpha setting.
After digging through a lot of documentation, the optimal solution seems to be to use core graphics functions to get the source images into a single 2-component GL texture, and then use GL to blend between them.
I will need to pass a uniform value glow_factor into the shader
The obvious solution might seem to simply use
r,g,b = in_r,g,b * { (1 - glow_factor) * inertPixel + glow_factor * shinyPixel }
(where inertPixel is the appropriate pixel of the inert diamond etc)...
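As a sketch of what that blend could look like in an ES 2.0 fragment shader, assuming the two greyscale images are packed into one GL_LUMINANCE_ALPHA texture (inert in the luminance channel, shiny in the alpha channel) and the diamond's characteristic colour arrives as a uniform; all names here are illustrative:

// Blends the inert and shiny greyscale images by glow_factor, then tints the
// result with the diamond's characteristic colour.
const char *glowFragmentShader =
    "precision mediump float;                                                       \n"
    "uniform sampler2D u_diamondTex;   // LUMINANCE_ALPHA: inert in .r, shiny in .a \n"
    "uniform float     glow_factor;    // 0.0 = inert, 1.0 = fully glowing          \n"
    "uniform vec3      u_diamondColor; // characteristic colour of this diamond     \n"
    "varying vec2      v_texCoord;                                                  \n"
    "void main()                                                                    \n"
    "{                                                                              \n"
    "    vec4  s     = texture2D(u_diamondTex, v_texCoord);                         \n"
    "    float inert = s.r;   // luminance channel                                  \n"
    "    float shiny = s.a;   // alpha channel                                      \n"
    "    float lit   = mix(inert, shiny, glow_factor);                              \n"
    "    gl_FragColor = vec4(u_diamondColor * lit, 1.0);                            \n"
    "}                                                                              \n";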
it looks like I would also do well to manufacture my own sparkles and add them over the top; a gem should sparkle white irrespective of its characteristic colour.
After having looked at this problem a little more, I can see several solutions
Solution A -- store the transition from glow=0 to glow=1 as 60 frames in memory, then load the appropriate frame into a GL texture every time it is required.
this has an obvious benefit that a graphic designer could construct the entire sequence and I could load it in as a bunch of PNG files.
another advantage is that these frames wouldn't need to be played in sequence... the appropriate frame can be chosen on-the-fly
however, it has a potential drawback of a lot of sending data RAM->VRAM
this can be optimised by using glTexSubImage2D; several frames can be sent simultaneously and then unpacked from within GL... in fact maybe the entire sequence. If this is so, then it would make sense to use PVRTC texture compression.
iOS: playing a frame-by-frame greyscale animation in a custom colour
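For the upload step mentioned above, replacing a pre-allocated texture's contents with the chosen frame might look like this (uncompressed RGBA shown; glowTexture, frames, frameWidth and frameHeight are placeholders):

// Allocate the texture once with glTexImage2D(..., NULL); then, whenever the
// glow factor changes, overwrite its contents with the appropriate frame.
int frameIndex = (int)(glow_factor * 59.0f + 0.5f);   // pick one of the 60 frames
const void *framePixels = frames[frameIndex];          // CPU-side RGBA pixel data

glBindTexture(GL_TEXTURE_2D, glowTexture);
glTexSubImage2D(GL_TEXTURE_2D,
                0,                        // mip level
                0, 0,                     // x/y offset within the texture
                frameWidth, frameHeight,  // region to replace
                GL_RGBA, GL_UNSIGNED_BYTE,
                framePixels);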
Solution B -- load glow=0 and glow=1 images as GL textures, and manually write shader code that takes in the glow factor as a uniform and performs the blend
this has the advantage that it is close to the wire and can be tweaked in all sorts of ways. Also it is going to be very efficient. The disadvantage is that it is a big extra slice of code to maintain.
Solution C -- set the GL blend mode (glBlendFunc) to perform additive blending.
then draw the glow=0 image, setting eg alpha=0.2 on each vertex.
then draw the glow=1 image, setting eg alpha=0.8 on each vertex.
this has an advantage that it can be achieved with a more generic code structure -- ie a very general ' draw textured quad / sprite ' class.
disadvantage is that without some sort of wrapper it is a bit messy... in my game I have a couple of dozen diamonds; at any one time maybe 2 or 3 are likely to be glowing. So on the first pass I would render EVERYTHING (setting alpha appropriately for anything that is glowing), and then on the second pass I would draw the glowing sprites again with the appropriate alpha.
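A sketch of Solution C's two draw passes, assuming a drawSpriteQuad(texture, alpha) helper that draws a textured quad with the given per-vertex alpha (the helper and variable names are made up):

// Additive blending: each draw adds (colour * alpha) onto what is already in
// the framebuffer, so the two passes sum to the blended result.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE);

// Pass 1: every diamond gets its inert image; weight glowing ones down.
drawSpriteQuad(inertTexture, isGlowing ? 1.0f - glowFactor : 1.0f);

// Pass 2: only diamonds that are glowing get the shiny image on top.
if (isGlowing)
    drawSpriteQuad(shinyTexture, glowFactor);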
It is worth noting that if I pursue Solution A, this would involve creating some sort of real-time movie-player object, which could be a very useful reusable code component.
