I am using the canvas draw functions DrawRect and FillText to draw onto a TBitmap, but I don't want the results antialiased. Does anyone know how to do that?
Working with OS X and Delphi XE3 (but I have XE4 and XE5 if needed).
Is the problem:
that the bitmap you create seems to have anti-aliasing present in the data?
or that you have a good bitmap and want to disable anti-aliasing in the viewer/display?
If it is the former, have you checked that the anti-aliasing is actually present in the bitmap, and not introduced by your viewer?
In the past I've found it useful to draw a black-on-white test pattern and display the image at 1:1 scale. IrfanView is a nice tool for viewing at 'true' scale. Then use a loupe/peak/lens to get a close-up of the actual pixels.
Black-on-white test patterns are particularly good since you should be able to see (hopefully) that the R, G and B sub-pixels are all equally illuminated when there is no anti-aliasing present. If you draw a black-on-white pattern and you get solitary bright sub-pixels then you've definitely got anti-aliasing (or some other form of corruption!).
My experience has been that image viewers often do interpolation for you, and it can be tricky to see what is going on unless you look at the actual bitmap data or have a close-up look at the unscaled image...
Hi, in the DrawBitmap method you need to set the HighSpeed parameter to True, as in the sample below:
NewBitmap.Canvas.DrawBitmap(SmallBmp, RectF(0, 0, SmallBmp.Width, SmallBmp.Height), RectF(0, 0, NewBitmap.Width, NewBitmap.Height), 1, True);
rgds
Ivan
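For completeness, a rough sketch of how that call might sit inside a BeginScene/EndScene pair (my own illustrative framing; SmallBmp and NewBitmap are the names from the one-liner above, everything else is assumed):

if NewBitmap.Canvas.BeginScene then
try
  // HighSpeed = True disables the interpolation used for the stretch
  NewBitmap.Canvas.DrawBitmap(SmallBmp,
    RectF(0, 0, SmallBmp.Width, SmallBmp.Height),
    RectF(0, 0, NewBitmap.Width, NewBitmap.Height),
    1, True);
finally
  NewBitmap.Canvas.EndScene;
end;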
Related
I have a simple program that renders a couple of 3D objects, using DirectX 3D 9 and HLSL. I'm just starting off with HLSL, I have no experience with 3D rendering.
I am able to change the texture & color of the models and fade between two textures without problems, however I was wondering what the best way to simply fade a 3D object (blend it with the background) would be. I would assume that it wouldn't be done as fading between two textures (using lerp), since I want the object faded to the entire background, so there would be many different textures behind it.
I'm using the LPD3DXEFFECT as my effect class, DrawIndexedPrimitive as the drawing function in each pass, and I only have a single pass. I'm also using Shader Model 3, as this is an older project.
The only way that I thought it possible would be to simply get the color of the pixel before you apply any changes, and then do calculations on it with the color of the texture of the model to attain a faded pixel. However, after looking over the internet, it does not appear that it's actually possible to get the color of a pixel before doing anything to it with HLSL.
Is it even possible to do something like this using HLSL? Am I missing something that could assist me here?
Any help is appreciated!
Forgive me if I'm misunderstanding, but it sounds like you're trying to simulate transparency instead of using built-in transparency.
If you're trying to get the color of the pixels behind the object and want to avoid using transparency, I'd start by trying to use the last rendered frame as a texture, then reference that texture in your current shader. There may be some way to do it within the same frame - to force all other rendering to go first, then handle the one object - but I don't know it.
After a long grind, I finally found a very good workaround for my problem, and I will try to explain my understanding of it for anyone else who has a similar issue. Thanks to Alexander Stewart for suggesting that there may be an in-built way to do it.
Method Description
Instead of taking care of the background fade in the HLSL pixel shader, there is another way to do it, using a method called Frame Buffer Alpha Blending (full MS Docs documentation: https://learn.microsoft.com/en-us/windows/win32/direct3d9/frame-buffer-alpha).
The basic idea behind this method is to provide a simple way of blending a given pixel that is to be rendered, with the existing pixel on the screen. There is a formula that is followed: FinalColor = ObjectPixelColor * SourceBlendFactor + BackgroundPixelColor * DestinationBlendFactor, all of these "variables" being groups of 4 float values, in the format (R, G, B, A).
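As a quick worked instance (my own numbers, not from the original post): with SourceBlendFactor = source alpha and DestinationBlendFactor = (1 - source alpha), an object pixel of (1.0, 0.0, 0.0) with alpha 0.25 over a background pixel of (0.0, 0.0, 1.0) blends to 0.25 * (1.0, 0.0, 0.0) + 0.75 * (0.0, 0.0, 1.0) = (0.25, 0.0, 0.75).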
How I Implemented it
Before doing anything with the actual shaders, in my Visual Studio C++ file I had to pass a few flags to my render device (I used LPDIRECT3DDEVICE9 as my device class). I had to set render states for both D3DRS_SRCBLEND and D3DRS_DESTBLEND, which correspond to SourceBlendFactor and DestinationBlendFactor respectively in the formula above. These are the factors that multiply the object and background pixel colors. There are many possible values that can be assigned to D3DRS_SRCBLEND and D3DRS_DESTBLEND (the full list is available in the MS Docs link above), but in order to achieve what I wanted (simply a way to fade an object into the background with an alpha number going from 0 to 1), I figured out the flags should be like this: SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA); SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);.
After setting these flags, before passing through my shaders & rendering, I just needed to set one more flag: SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);. I was also able to alternate between TRUE and FALSE here, without changing anything else, with no rendering problems (although my project was very simple; it will probably cause issues on larger projects). You can then pass any arguments you want, such as the alpha number, to the HLSL shader as a global variable (I did it using SetValue()).
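Putting those calls together, a minimal sketch of the device-side setup might look like this (g_pDevice, g_pEffect and gFadeAlpha are placeholder names I'm assuming, not identifiers from the original project):

// Enable frame buffer alpha blending: final = src * srcAlpha + dest * (1 - srcAlpha)
g_pDevice->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
g_pDevice->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
g_pDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);

// Pass the fade amount (0 = fully faded out, 1 = fully visible) to the shader as a global
float fadeAlpha = 0.5f;
g_pEffect->SetValue("gFadeAlpha", &fadeAlpha, sizeof(float));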
Going back to my HLSL shader, after these changes, passing a color float4 variable taken from the tex2D() function in my pixel shader with an alpha value between 0 and 1 yielded the correct fade, provided there aren't other issues (another issue that I had, but hadn't realized at the time, was that my transparent object was actually rendering before the background, so I can only recommend checking the rendering order when working on rendering projects).
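To illustrate the shader side, a minimal Shader Model 3 pixel shader along those lines could look like the following (gFadeAlpha and texSampler are assumed names for the global and the sampler, not taken from the original shader):

float gFadeAlpha;        // set from C++ via SetValue()
sampler2D texSampler;    // the model's texture

float4 PS_Fade(float2 uv : TEXCOORD0) : COLOR
{
    float4 color = tex2D(texSampler, uv);
    color.a *= gFadeAlpha;   // the frame buffer blend uses this alpha
    return color;
}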
I'm sure there is probably a better way of implementing this with the latest DirectX, but my compiler only supports Shader Model 3 and lower.
I have been working with OpenGL on iOS, setting the colors with glColor4f(r, g, b, a) and then drawing my own color on a white UIImageView. I basically have a brush, which is moved around by the user's touch, and it paints the color onto the canvas. But this color needs to look like water paint (like smudged color).
Does anyone understand/know how to get a watercolor effect like this app does, and how the background UIImageView has a texture on it?
https://itunes.apple.com/us/app/hello-watercolor/id539414526?mt=8
or check out the water paint in this: http://www.fiftythree.com/paper
I created a bounty on this as I am really having a hard time grasping how to derive such smooth flowing colors out of normal colors. Even if you point me in the right direction, or to some sample code on how I can get the water-paint effect, it would be really helpful ^_^
And as a bonus, it would also be helpful if you could point out how to make the canvas it is painted on look realistic and blended with the paint. Does blending/GLSL have anything to do with this?
Is there any sample project on this?
If you are still struggling with the basics of getting realistic-looking watercolors working, you may want to experiment/prototype in Photoshop first.
http://www.zoepiel.com/tutorials/watercolor/ shows some very effective tricks for creating watercolor images with simple tools.
The most interesting one is to multiply a group of watercolor layers with a greyscale watercolor paper image. The texture of the paper makes some parts remain white, and other parts saturate with color, just like real watercolor.
Each layer remains 'wet' in the sense that the colors within it blend, but the layers are 'dry' with respect to each other.
She also explains some of her brush and blur settings and shows what they do.
Once you can produce the desired effect in Photoshop, you'll have clear specifications of what you want to do and you'll be quite a bit closer to programming it.
Looking at the examples you posted, it looks like they are using a simple Gaussian Blur with a radius of double your brush size. This may be an incomplete solution, but it's at least the first level.
I'm drawing some cars. They're Bitmaps, loaded from PNGs in the library. I need to be able to color the cars: red ones and green ones and blue ones, whatever. However, when you paint the car green, the tires should stay black, and the windows stay window-color.
I know of two ways to handle this, neither of which makes me happy. First, I could have two bitmaps for each car: one underneath for the body color, and one on top for detail bits. The underneath bitmap gets its transform.colorTransform set to turn the white car body into whatever color I need. Not great, because I end up with twice as many Bitmaps running around on screen at runtime.
Second, I could programmatically search-and-replace "white" with "car-body" color when I load the bitmap for each car. Not great either, because the amount of memory I take up multiplies by however many colors I need.
What I would LIKE would be a way to say "draw this Bitmap with JUST THE WHITE PARTS turned into this other color" at runtime. Is there anything like this available? I will be less than surprised if the answer is "no," but I figure it's worth asking.
You might have answered the question yourself.
I think your first approach would need only two transparent images: one with the pixels of the parts that need to change colour, and one with the rest of the image. You would use colorTransform or a ColorMatrixFilter as appropriate. It might even work to cover the pixels that need the colour change with a Sprite filled with a flat colour and its blend mode set to overlay?
The downside would be that you will need to create a 'colour map'/set of pixels to replace for each different item that will need colour replacement.
For the second approach:
You might isolate the areas using something like threshold().
For speed, you might want to store the indices of the pixels you need to replace in a Vector.<int> object that can be used in conjunction with BitmapData's getVector() method. (You would loop once to fetch the pixel indices that need to be replaced.)
Since you will use the same image (same dimensions) to fill the same content with a different colour, you'll always loop through the same pixels. Also keep in mind that you will gain a bit of speed by calling lock() before your setPixel() loop and unlock() after it.
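A rough sketch of that idea, assuming 'source' is the loaded BitmapData, 'tint' is the replacement ARGB colour and 'whiteIndices' is the cached index list (all placeholder names):

// One-time pass: remember which pixels are pure opaque white.
var whiteIndices:Vector.<int> = new Vector.<int>();
var pixels:Vector.<uint> = source.getVector(source.rect);
for (var i:int = 0; i < pixels.length; i++) {
    if (pixels[i] == 0xFFFFFFFF) whiteIndices.push(i);
}

// Recolour pass: reuse the cached indices for every new colour.
source.lock();
for each (var index:int in whiteIndices) {
    source.setPixel32(index % source.width, int(index / source.width), tint);
}
source.unlock();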
Alternatively you could use Pixel Bender and try some green screen/background subtraction techniques. It should be fast and wouldn't delay the execution of the rest of your AS3 code, as Pixel Bender code runs in its own thread.
Also check out Lee's Pixel Bender subtraction technique.
Although it's a bit old now, you can use some knowledge from Quasimondo's article too.
HTH
I'm a little confused about where you see the difference between your second approach and the one you would like to have. You can go over your loaded bitmap pixel by pixel and read out the color; if it turns out to be white, replace it with another color. I don't see where the multiplied memory consumption would come from.
You might want to try my selective color transform: http://www.quasimondo.com/archives/000614.php - it's from 2006, so some parts of it could probably be replaced by a pixel bender filter now.
Why not just load the pieces separately, perform the color transform on the one you want to change, then do a BitmapData.copyPixels() with the result? The blit routine runs in machine code, so is wicked fast. Doing it pixel by pixel in ActionScript would be glacially slow in comparison.
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/display/BitmapData.html#copyPixels()
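As a sketch of that approach (bodyData and detailData are assumed to be the two separately loaded BitmapData pieces; the names and the green ColorTransform are mine, purely for illustration):

// Tint the white car body, then blit the untouched detail layer on top.
var tinted:BitmapData = bodyData.clone();
tinted.colorTransform(tinted.rect, new ColorTransform(0, 0.8, 0, 1)); // greenish body
tinted.copyPixels(detailData, detailData.rect, new Point(0, 0), null, null, true);
var car:Bitmap = new Bitmap(tinted);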
I have written a 2D Jump&Run Engine resulting in a 320x224 (320x240) image. To maintain the old school "pixely"-feel to it, I would like to scale the resulting image by 2 or 3 or 4, according to the resolution of the user.
I don't want to scale each and every sprite, but the resulting image!
Thanks in advance :)
Bob's answer is correct about changing the filtering mode to TextureFilter.Point to keep things nice and pixelated.
But possibly a better method than scaling each sprite (as you'd also have to scale the position of each sprite) is to just pass a matrix to SpriteBatch.Begin, like so:
sb.Begin(/* first three parameters */, Matrix.CreateScale(4f));
That will give you the scaling you want without having to modify all your draw calls.
However it is worth noting that, if you use floating-point offsets in your game, you will end up with things not aligned to pixel boundaries after you scale up (with either method).
There are two solutions to this. The first is to have a function like this:
public static Vector2 Floor(Vector2 v)
{
return new Vector2((float)Math.Floor(v.X), (float)Math.Floor(v.Y));
}
And then pass your position through that function every time you draw a sprite. Although this might not work if your sprites use any rotation or offsets. And again you'll be back to modifying every single draw call.
The "correct" way to do this, if you want a plain point-wise scale-up of your whole scene, is to draw your scene to a render target at the original size. And then draw your render target to screen, scaled up (with TextureFilter.Point).
The function you want to look at is GraphicsDevice.SetRenderTarget. This MSDN article might be worth reading. If you're on or moving to XNA 4.0, this might be worth reading.
I couldn't find a simpler XNA sample for this quickly, but the Bloom Postprocess sample uses a render target that it then applies a blur shader to. You could simply ignore the shader entirely and just do the scale-up.
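For reference, a hedged sketch of that render-target scale-up using the XNA 4.0 API (sceneTarget and SCALE are names made up for illustration):

// Draw the whole scene at native resolution into an off-screen render target.
RenderTarget2D sceneTarget = new RenderTarget2D(GraphicsDevice, 320, 224);
GraphicsDevice.SetRenderTarget(sceneTarget);
GraphicsDevice.Clear(Color.Black);
// ... draw the scene as usual here ...
GraphicsDevice.SetRenderTarget(null);

// Draw the render target to the back buffer, scaled up with point sampling.
const int SCALE = 3;
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque, SamplerState.PointClamp, null, null);
spriteBatch.Draw(sceneTarget, new Rectangle(0, 0, 320 * SCALE, 224 * SCALE), Color.White);
spriteBatch.End();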
You could use a pixelation effect. Draw to a RenderTarget2D, then draw the result to the screen using a pixel shader. There's a tool called Shazzam Shader Editor that lets you try out pixel shaders, and it includes one that does pixelation:
http://shazzam-tool.com/
This may not be what you wanted, but it could be good for allowing a high-resolution mode and for having the same effect no matter what resolution was used...
I'm not exactly sure what you mean by "resulting in ... an image" but if you mean your end result is a texture then you can draw that to the screen and set a scale:
spriteBatch.Draw(texture, position, source, color, rotation, origin, scale, effects, depth);
Just replace the scale with whatever number you want (2, 3, or 4). I do something similar but scale per sprite and not the resulting image. If you mean something else let me know and I'll try to help.
XNA defaults to anti-aliasing the scaled image. If you want to retain the pixelated goodness you'll need to draw in immediate sort mode and set some additional parameters:
spriteBatch.Begin(SpriteBlendMode.AlphaBlend, SpriteSortMode.Immediate, SaveStateMode.None);
GraphicsDevice.SamplerStates[0].MagFilter = TextureFilter.Point;
GraphicsDevice.SamplerStates[0].MinFilter = TextureFilter.Point;
GraphicsDevice.SamplerStates[0].MipFilter = TextureFilter.Point;
It's either the Point or the None TextureFilter. I'm at work so I'm trying to remember off the top of my head. I'll confirm one way or the other later today.