Apply color saturation on the entire renderer - XNA

I would like to apply a color saturation effect to the entire final rendered frame.
Is there an easy way to do it without using shaders? I don't know anything about DirectX.
I saw an "Effect" parameter in spriteBatch.Begin(), but I didn't find any tutorial about it.
Hope you can guide me.

You need a shader for this; a shader is an Effect.
You can create a new effect by right-clicking a content project, clicking "Add New Item", and selecting an "Effect" file. The resulting .fx file is in the HLSL language. It will be compiled by the content pipeline, and you can load it with:
Effect myEffect = Content.Load<Effect>("myEffect");
There is an official example of how to use effects with SpriteBatch here (if you want desaturation, there's an example in there). And this blog post may also be useful.
I won't reproduce the code for a saturation effect here, but you can find several examples via Google. Here is one example on the GameDev site.
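The plumbing on the C# side is the same regardless of which shader you pick. Here is a minimal sketch, assuming XNA 4.0; the asset name "desaturate" and the "Saturation" parameter are hypothetical, so substitute whatever your .fx file actually declares. To affect the entire final frame, render the scene into a RenderTarget2D first, then draw that target full-screen with the effect applied:

RenderTarget2D sceneTarget;
Effect desaturateEffect;

protected override void LoadContent()
{
    spriteBatch = new SpriteBatch(GraphicsDevice);
    desaturateEffect = Content.Load<Effect>("desaturate"); // hypothetical asset name
    sceneTarget = new RenderTarget2D(GraphicsDevice,
        GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height);
}

protected override void Draw(GameTime gameTime)
{
    // 1. Draw the whole scene into an offscreen rendertarget.
    GraphicsDevice.SetRenderTarget(sceneTarget);
    GraphicsDevice.Clear(Color.CornflowerBlue);
    // ... draw your scene here ...

    // 2. Draw that target to the backbuffer; the effect passed to
    //    Begin() is applied to everything in this Begin/End pair.
    GraphicsDevice.SetRenderTarget(null);
    desaturateEffect.Parameters["Saturation"].SetValue(0.5f); // hypothetical parameter
    spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
        null, null, null, desaturateEffect);
    spriteBatch.Draw(sceneTarget, Vector2.Zero, Color.White);
    spriteBatch.End();

    base.Draw(gameTime);
}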

Related

Best approach for coding a painting app on iOS / iPad

I’m trying to build a drawing/painting app for the iPad, with textured brush tips and paper.
So far, all the drawing-app example code I've come across seems to work by stroking a path. However, I'd like to actually apply a texture all along the path, to simulate, say, an oil brush or charcoal.
Here is an example of a brush tip texture: Brush tip
The result when painting with the same brush tip: Result
In the results, the top output is what it looks like when the "brush tip" texture is applied far apart along the path.
The bottom result is the texture applied with very small steps along the path. Those who've worked in Photoshop with custom brushes will find this familiar.
I had once prototyped this in Processing years ago (I've since lost the source code), and got it to work in real-time.
In Processing, I converted both the brush tip PNG and the canvas (the image I'm painting onto) into arrays of integers. Then I simply copied the values from the brush tip into the canvas array at the appropriate indices, displayed the image at the end of the cycle for that time-step, and repeated this dozens of times between each pair of points returned by the mouse.
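A minimal sketch of that stamping loop (language-agnostic logic, shown in C# only to make the steps concrete; arrays are 32-bit ARGB pixels, and the naive overwrite stands in for real alpha blending):

void StampAlongSegment(int[] canvas, int canvasW, int canvasH,
                       int[] brush, int brushW, int brushH,
                       float x0, float y0, float x1, float y1,
                       float spacing)
{
    float dx = x1 - x0, dy = y1 - y0;
    float length = (float)Math.Sqrt(dx * dx + dy * dy);
    int steps = Math.Max(1, (int)(length / spacing));

    for (int s = 0; s <= steps; s++)
    {
        // Interpolate a stamp position between the two input points.
        float t = s / (float)steps;
        int cx = (int)(x0 + dx * t) - brushW / 2;
        int cy = (int)(y0 + dy * t) - brushH / 2;

        // Copy brush pixels into the canvas at that position.
        for (int by = 0; by < brushH; by++)
        {
            int y = cy + by;
            if (y < 0 || y >= canvasH) continue;
            for (int bx = 0; bx < brushW; bx++)
            {
                int x = cx + bx;
                if (x < 0 || x >= canvasW) continue;
                canvas[y * canvasW + x] = brush[by * brushW + bx];
            }
        }
    }
}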
How would I approach this in iOS, and in real-time? I tried this (https://blog.avenuecode.com/how-to-use-uikit-for-low-level-image-processing-in-swift) but it's way too slow.
This makes me believe Metal might be the only way forward. Is that true, or am I complicating this unnecessarily?
Thank you for any guidance!
PS. I'm coding in Swift 5, targeting iOS 13, in Xcode 11.5.
Welcome!
I recommend you check out Core Image. It's Apple's framework for image processing (at a higher level than Metal, though it can integrate with Metal). Unfortunately, the documentation is a bit outdated, but I'm sure you can translate it into Swift.
Here Apple describes how you would realize a painting app with Core Image and here you can download the corresponding sample project.

iOS - Displaying text with OpenGL ES 2.0

I'm struggling surprisingly hard to display text with OpenGL ES 2.0. There are a ton of posts on Stack Overflow debating the subject, showing a few lines of code, or linking to articles from 2010 that target OpenGL ES 1.x (not compatible).
But they are quite vague, and to my knowledge there is no complete code or convenient way to display text with version 2.
Do you know if there is a modern way to display text? Like just adding a pod and writing something like this?
[font drawText:@"This is a text" size:12];
Thanks a lot in advance; any help would be much appreciated.
EDIT 1: I have to use OpenGL ES 2.0, and I can't use something else, for internal reasons.
EDIT 2: I've found two libraries that do it:
FTGLES: it crashes at runtime when I try to use it.
https://github.com/chandl34/public/tree/master/personal/c%2B%2B/Font
This one is simple, but written for ES1, so I need to port the code to ES2.
EDIT 3: ES1 and ES2 are different: ES2 works with shaders.
Since you are targeting ES2, where shaders are required, it might be a bit hard to find a tool that does this for you. And if you do find one, it may conflict with your pipeline in many ways, such as binding its own shaders and buffers or changing blending and depth-buffer settings. Even then, you may not have good control over how and where the text is drawn (on a 3D rotating box, for instance).
But from your question (its snippet), it seems all you really want is a 2D overlay with text that looks exactly like a UILabel. If that is the case, then I suggest actually using UILabel to draw these texts. You can easily add such views on top of the view that shows your OpenGL content.
In the other case, where you still need to draw text on a 3D object and want to do it very easily, I still suggest UILabel: render the label to an image and push it into a new texture (or a texture atlas), then draw it as any other textured object. UILabel will handle all the fonts, alignment, colors, multiline text wrapping, and font size adjustment for you. So if you already have a system for drawing a texture in the scene, you are not far from a tool that draws text on the screen, since a texture is how the data gets transferred.
Nothing fundamental has changed since OpenGL ES1: text is usually drawn in a planar (orthographic) projection by generating quads that are textured from a font texture. There are many tutorials on this topic, and here is one example of how to do it. However, it is quite a lot of work when you start from scratch; a sketch of the quad-generation step follows.
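A minimal sketch of that quad generation (the geometry is language-agnostic; shown here in C#, so port it to your platform). It assumes a hypothetical 16x16-glyph monospace ASCII font atlas and emits (x, y, u, v) vertices of the kind a typical ES2 shader would consume:

// Requires: using System.Collections.Generic;
List<float> BuildTextQuads(string text, float x, float y,
                           float glyphW, float glyphH)
{
    var verts = new List<float>();
    const int cols = 16;            // glyphs per atlas row (assumed layout)
    const float cell = 1f / cols;   // UV size of one glyph cell

    foreach (char c in text)
    {
        float u = (c % cols) * cell;
        float v = (c / cols) * cell;

        // Two triangles per glyph; each vertex is x, y, u, v.
        verts.AddRange(new float[] {
            x,          y,          u,        v,
            x + glyphW, y,          u + cell, v,
            x + glyphW, y + glyphH, u + cell, v + cell,
            x,          y,          u,        v,
            x + glyphW, y + glyphH, u + cell, v + cell,
            x,          y + glyphH, u,        v + cell,
        });
        x += glyphW;                // monospace advance
    }
    return verts;                   // upload as a vertex buffer and draw
}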
There might be a better way but, depending on what you plan to do, it might not be suitable: you can mix UIKit with the OpenGL view and have text drawn with UIKit as an overlay (UILabel, UIButton, etc.).

Strange rendering behavior with transparent texture in WebGL

I've been writing a little planet generator using Haxe + Away3D, deploying to HTML5/WebGL. But I'm having a strange issue when rendering my clouds. I have the planet mesh, and then the clouds mesh, slightly bigger, in the same position.
I'm using a perlin noise function to generate the planetary features and the cloud formations, writing them to a bitmap and applying the bitmap as the texture. Now, strangely, when I deploy this to iOS or C++/OSX, it renders exactly how I wanted it to:
Now, when I deploy to WebGL, it generates an identical diffuse map, but renders as:
(The above was at a much lower resolution, due to how often I was reloading the page. The problem persisted at higher resolutions.)
The clouds are there, and the edges look alright, wispy and translucent. But the inside is opaque and seemingly rendered differently (each pixel is the same color; only the alpha channel changes).
I realize this likely has something to do with how the code is ultimately compiled/generated in Haxe, but I'm hoping it's something simple like a render setting or blending mode I'm not setting. Since I'm not even sure exactly what is happening, though, I don't know where to look.
Here's the diffuse map being produced. I overlaid it on red so the clouds would be viewable.
BitmapData.perlinNoise does not work on HTML5.
You should implement it yourself, or use a pre-rendered image. The OpenFL HTML5 backend just stubs it out:
public function perlinNoise (baseX:Float, baseY:Float, numOctaves:UInt, randomSeed:Int, stitch:Bool, fractalNoise:Bool, channelOptions:UInt = 7, grayScale:Bool = false, offsets:Array<Point> = null):Void {
    openfl.Lib.notImplemented ("BitmapData.perlinNoise");
}
https://github.com/openfl/openfl/blob/c072a98a3c6699f4d334dacd783be947db9cf63a/openfl/display/BitmapData.hx
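If you do implement it yourself, even simple fractal value noise (not true Perlin noise) may be enough for cloud formations. A minimal sketch of the idea, with language-agnostic logic shown in C# (port to Haxe as needed); the hash constants are arbitrary:

// Hash a lattice point to a pseudo-random value in [0, 1].
float Hash(int x, int y, int seed)
{
    int h = x * 374761393 + y * 668265263 + seed * 1274126177;
    h = (h ^ (h >> 13)) * 1274126177;
    return ((h ^ (h >> 16)) & 0x7fffffff) / (float)0x7fffffff;
}

// Bilinearly interpolate between the four surrounding lattice values.
float SmoothNoise(float x, float y, int seed)
{
    int xi = (int)Math.Floor(x), yi = (int)Math.Floor(y);
    float tx = x - xi, ty = y - yi;
    float a = Hash(xi, yi, seed),     b = Hash(xi + 1, yi, seed);
    float c = Hash(xi, yi + 1, seed), d = Hash(xi + 1, yi + 1, seed);
    float top = a + (b - a) * tx, bottom = c + (d - c) * tx;
    return top + (bottom - top) * ty;
}

// Sum several octaves with halving amplitude and doubling frequency.
float FractalNoise(float x, float y, int octaves, int seed)
{
    float sum = 0, amp = 0.5f, freq = 1;
    for (int o = 0; o < octaves; o++)
    {
        sum += amp * SmoothNoise(x * freq, y * freq, seed + o);
        amp *= 0.5f; freq *= 2;
    }
    return sum;                     // roughly in [0, 1)
}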
Also, WebGL-Inspector is very useful for debugging WebGL apps. Have you used it?
http://benvanik.github.io/WebGL-Inspector/
Well, then, did you upload that image from ByteArray?
Lime once allowed accessing a ByteArray with the array index operator, even though it shouldn't on JS; this is fixed in the latest version of Lime to avoid mistakes.
I used the __get and __set methods instead of [] to access a byte array.
Away3D itself might be the cause of this issue too, because the backend code is generated from different source files depending on the target you use.
For example, the byteArrayOffset parameter of Texture.uploadFromByteArray is supported on HTML5, but not on native.
If Away3D is the cause of the problem, I'm not sure for now which part of its code is responsible.
EDIT: I've also experienced a problem with OpenFL's latest WebGL backend (I think legacy OpenFL doesn't have it): OpenFL's sprite renderer was changing colorMask (and possibly other OpenGL render states) without my knowledge. This occurred because my code and OpenFL's sprite renderer were actually sharing the same OpenGL context. I got rid of the problem by manually disabling OpenFL's sprite renderer.

Project ideation using image processing

I am in my final year of a BS in Computer Science. I have chosen a project in the image processing domain, but I really don't know where to start! Here is a rough draft of my project idea:
Project Description:
Often people are faced with the problem of deciding which colors to choose to paint their walls, doors and ceilings. They want to know how their rooms will look after applying a certain color. We want to design a mobile application that gives people the opportunity to preview their rooms/walls/ceilings with a certain color before actually applying it. Through our application the user can take photos of their rooms/walls/ceilings, change their colors virtually, and preview the result. This will give them a good estimate of the final look of their house.
Development will be in Java using the OpenCV library.
Can anyone provide some help?
For getting started with OpenCV on Android you can follow the tutorial here.
Based on your description, I think you need to do the following:
Filter out the color of the room's wall or ceiling.
Replace it with your preview color.
But as a room's color is not uniform, you may need to mark the color manually and segment it; the watershed algorithm might be helpful here.
One more thing: there will likely be lighting variation, so you should work in the HSV color space instead of RGB (see the sketch below).
Finally, this is not the full solution, but it should give you some idea of how to start your project.
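To make the HSV point concrete, here is a minimal sketch of the recoloring step: swap in the preview color's hue and saturation but keep each pixel's original value/brightness, so shadows and highlights survive. The logic is language-agnostic and shown in C#; RgbToHsv/HsvToRgb and the wall mask are hypothetical helpers (OpenCV's cvtColor and your watershed segmentation would play those roles):

void RecolorWall(int[] pixels, bool[] wallMask,
                 float targetHue, float targetSat)
{
    for (int i = 0; i < pixels.Length; i++)
    {
        if (!wallMask[i]) continue;          // only pixels segmented as wall

        // Hypothetical helper returning (h, s, v); keep only v.
        var (_, _, v) = RgbToHsv(pixels[i]);

        // New hue and saturation, original brightness.
        pixels[i] = HsvToRgb(targetHue, targetSat, v);
    }
}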
ImageMagick is a famous image processing library; you may look at that one too. It can perform numerous operations on images.

Can a 3dsmax model be exported to include lighting effects?

I created a really basic cylinder, added a material, and added a glow effect.
Can I export the model so that it includes the glow effect and looks like the rendering?
http://imgur.com/VaNJLj4
Clarification:
Can I export the model to .fbx or .x and have it contain the lighting information, so that if I import it into Unity or XNA the model will look like the rendering?
"glowing" is really a post-process kind of effect. Actually a blur. There are quite a few tutorials on how to do this in XNA, but I doubt that you can easily export this from you modeling software (as in not possible at all).
The reason is that doing it usually requires setting up multiple rendertargets, custom shader, etc, which you will have to do yourself.
The reason you need multiple rendertargets:
When you render a model, only the pixels WITHIN the (visibly) outer vertices are processed by the pixel shader. Hence, you can't render a smooth "fade-out" outside the model itself, as in your picture.
What you usually do is use a shader that renders your object normally, but also writes a "glow color" to another rendertarget.
When all models have finished rendering, you apply a blur effect to this second RT.
Then you blend your main RT with the blurred glow RT.
This is VERY superficial, and I haven't done it in AGES, so please DO check out some tutorials. Also, this bloom sample basically does the same thing, but on the entire scene, I think: http://xbox.create.msdn.com/en-US/education/catalog/sample/bloom
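A very rough XNA 4.0 sketch of the pipeline described above, assuming the spriteBatch field from the standard Game template; RenderScene and the blur effect are hypothetical placeholders (the linked bloom sample has the real thing):

RenderTarget2D sceneTarget, glowTarget, blurredGlow;
Effect blurEffect;                             // hypothetical blur .fx

void DrawWithGlow()
{
    // 1. Render the scene normally, then render only the "glow color"
    //    of glowing objects into a second rendertarget.
    GraphicsDevice.SetRenderTarget(sceneTarget);
    RenderScene(glowPass: false);              // hypothetical scene draw
    GraphicsDevice.SetRenderTarget(glowTarget);
    RenderScene(glowPass: true);               // hypothetical glow-only draw

    // 2. Blur the glow rendertarget with the custom shader.
    GraphicsDevice.SetRenderTarget(blurredGlow);
    spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
        null, null, null, blurEffect);
    spriteBatch.Draw(glowTarget, Vector2.Zero, Color.White);
    spriteBatch.End();

    // 3. Composite: main scene first, then the blurred glow on top,
    //    blended additively.
    GraphicsDevice.SetRenderTarget(null);
    spriteBatch.Begin();
    spriteBatch.Draw(sceneTarget, Vector2.Zero, Color.White);
    spriteBatch.End();
    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Additive);
    spriteBatch.Draw(blurredGlow, Vector2.Zero, Color.White);
    spriteBatch.End();
}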
Add the glow in 3ds Max with filters; then it will render automatically.
A minor note: 3ds Max is a very big program with lots of possibilities, so take your time finding everything out. Believe me, it will take time.
