To convert an image to grayscale, I've tried:
MagickQuantizeImage(wand, 256, GRAYColorspace, 0, MagickFalse, MagickFalse);
and
MagickTransformImageColorspace(wand, GRAYColorspace);
in my program, and both work as expected.
But what's the difference between them? Image quality? Side effects? Efficiency?
Thanks in advance.
My guess, and it is a guess, would be flexibility. The former gives you fine-grained control over the quantization (number of colors, tree depth, dithering, error measurement), whereas the latter applies defaults that happen to be adequate for your purposes.
Related
I have a certain library that uses WebGL1 to render things.
It heavily uses float textures and instanced rendering.
Nowadays support for WebGL1 is pretty inconsistent: some devices support WebGL2, where these extensions are core, but not WebGL1; others support WebGL1 but not the extensions.
At the same time, support for WebGL2 isn't amazing. Maybe one day it will be, but for now it isn't.
I started looking at what it would take to support both versions.
For shaders, I think I can mostly get away with #define-ing things, for example #define texture2D texture and other similar tricks.
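A rough sketch of that prefix idea, assuming GLSL ES 1.00 sources and an isWebGL2 flag determined at context creation (all variable names here are placeholders):
// GLSL ES 3.00 requires #version to be the very first line, so the prefix goes on top.
const vsPrefix = isWebGL2
  ? '#version 300 es\n#define attribute in\n#define varying out\n'
  : '';
const fsPrefix = isWebGL2
  ? '#version 300 es\nprecision highp float;\nout vec4 fragColor;\n' +
    '#define varying in\n#define texture2D texture\n#define gl_FragColor fragColor\n'
  : '';
const vsSource = vsPrefix + originalVertexShaderSource;
const fsSource = fsPrefix + originalFragmentShaderSource;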
When it comes to extensions, it becomes more problematic, since the extension objects no longer exist.
As an experiment, I tried copying the extension properties into the context object, e.g. gl.drawArraysInstanced = (...args) => ext.drawArraysInstancedANGLE(...args).
When it comes to textures, not much needs to change; perhaps add something like gl.RGBA8 = gl.RGBA when running in WebGL1, so that code written against the WebGL2 constants "just works".
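As a sketch of what that experiment could look like for the two features I mentioned (the function name is made up, and error handling for missing extensions is left out):
function polyfillWebGL1(gl) {
  // Map the ANGLE_instanced_arrays entry points onto the names WebGL2 uses natively.
  const ia = gl.getExtension('ANGLE_instanced_arrays');
  if (ia) {
    gl.drawArraysInstanced = (...args) => ia.drawArraysInstancedANGLE(...args);
    gl.drawElementsInstanced = (...args) => ia.drawElementsInstancedANGLE(...args);
    gl.vertexAttribDivisor = (...args) => ia.vertexAttribDivisorANGLE(...args);
  }
  // Float textures only need the extension enabled in WebGL1;
  // the sized-format constants still differ between the two APIs.
  gl.getExtension('OES_texture_float');
  // Alias the WebGL2 constant so code written against it keeps working.
  gl.RGBA8 = gl.RGBA;
}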
So then comes the question: did anyone try this?
I am worried about it hurting performance, especially the extra indirection for function calls.
It will also make the code less obvious to read if the assumption is that it can run in WebGL1; after all, no WebGL1 context has drawArraysInstanced or RGBA8. It also confuses TypeScript typings and other minor things.
The other option is to have branches all over the code: two versions of shaders (or #ifdef trickery), lots of branching for every place where texture formats are needed, and every place where instancing is done.
Having something like what follows all over the place is pretty ugly:
if (version === 1) {
  instancedArrays.vertexAttribDivisorANGLE(m0, 1);
  instancedArrays.vertexAttribDivisorANGLE(m1, 1);
  instancedArrays.vertexAttribDivisorANGLE(m2, 1);
  instancedArrays.vertexAttribDivisorANGLE(m3, 1);
} else {
  gl.vertexAttribDivisor(m0, 1);
  gl.vertexAttribDivisor(m1, 1);
  gl.vertexAttribDivisor(m2, 1);
  gl.vertexAttribDivisor(m3, 1);
}
Finally, maybe there's a third way I didn't think about.
Got any recommendations?
Unfortunately I think most answers will be primarily opinion based.
The first question is why support both? If your idea runs fine on WebGL1 then just use WebGL1. If you absolutely must have WebGL2 features then use WebGL2 and realize that many devices don't support WebGL2.
If you're intent on doing it, twgl tries to make it easier by providing a function that copies all the WebGL1 extensions into their WebGL2 API positions. So, for the case you mentioned, instead of
ext = gl.getExtension('ANGLE_instanced_arrays');
ext.drawArraysInstancedANGLE(...)
you do
twgl.addExtensionsToContext(gl);
gl.drawArraysInstanced(...);
I don't believe there will be any noticeable perf difference. Since those functions are only called a few hundred times a frame, the wrapping is not going to be the bottleneck in your code.
The point though is not really to support WebGL1 and WebGL2 at the same time. Rather it's just to make it so the way you write code is the same for both APIs.
Still, there are real differences between the two APIs. For example, to use a FLOAT RGBA texture in WebGL1 you use
gl.texImage2D(target, level, gl.RGBA, width, height, 0, gl.RGBA, gl.FLOAT, ...)
In WebGL2 it's
gl.texImage2D(target, level, gl.RGBA32F, width, height, 0, gl.RGBA, gl.FLOAT, ...)
WebGL2 will fail if you try to call it the same as WebGL1 in this case. There are other differences as well.
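One way to hide that particular difference is a tiny helper that picks the internal format by version; a sketch, assuming you keep an isWebGL2 flag around yourself:
// Allocate an RGBA float texture with the internal format each API expects.
function texImage2DFloatRGBA(gl, isWebGL2, width, height, data) {
  const internalFormat = isWebGL2 ? gl.RGBA32F : gl.RGBA;
  gl.texImage2D(gl.TEXTURE_2D, 0, internalFormat, width, height, 0,
                gl.RGBA, gl.FLOAT, data);
}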
Note though that your example of needing RGBA8 is not true.
gl.texImage2D(target, level, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, ...)
will work just fine in WebGL1 and WebGL2. The spec specifically says that combination results in RGBA8 on WebGL2.
The biggest difference, though, is that there is no reason to use WebGL2 if you can get by with WebGL1. Or, vice versa, if you need WebGL2 then you probably cannot easily fall back to WebGL1.
For example, you mentioned using defines for shaders, but what are you going to do about features in WebGL2 that aren't in WebGL1? Features like texelFetch, the integer % (modulo) operator, integer attributes, and so on. If you need those features you mostly need to write a WebGL2-only shader. If you don't need those features then there was really no point in using WebGL2 in the first place.
Of course, if you really want to go for it, maybe you want to make a fancier renderer if the user has WebGL2 and fall back to a simpler one if they only have WebGL1.
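A sketch of that kind of fallback at context-creation time:
// Prefer WebGL2, drop back to WebGL1, and remember which one you got.
const canvas = document.querySelector('canvas');
let gl = canvas.getContext('webgl2');
const isWebGL2 = !!gl;
if (!gl) {
  gl = canvas.getContext('webgl');
}
if (!gl) {
  throw new Error('WebGL is not supported on this device');
}
// Choose the fancy or the simple render path based on isWebGL2 from here on.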
TL;DR: IMO, pick one or the other.
I found this question when writing the documentation for my library, which has many objectives, but one of them is exactly this: to support WebGL1 and WebGL2 at the same time for higher cross-device compatibility.
https://xemantic.github.io/shader-web-background/
For example, I discovered with BrowserStack that Samsung phones don't support rendering to floating point textures in WebGL1, while it is perfectly fine for them in WebGL2. At the same time, WebGL2 will never appear on Apple devices, but rendering to half floating point textures is pretty well supported there.
My library does not provide a full WebGL abstraction, but rather configures the pipeline for fragment shaders. Here is the source on GitHub, with the WebGL strategy code depending on the version:
https://github.com/xemantic/shader-web-background/blob/main/src/main/js/webgl-utils.js
Therefore, to answer your question: it is doable and desirable, but doing it in a totally generic way, for every WebGL feature, might be quite challenging. I guess the first question to ask is "What would be the common denominator?" in terms of supported extensions.
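For the render-to-float case above, a capability probe along these lines is one way to find that common denominator (the extension names are the standard ones; whether a given device exposes them is exactly the per-device question):
// Rough check: can this context render into a floating point texture?
function canRenderToFloat(gl, isWebGL2) {
  if (isWebGL2) {
    // In WebGL2 float textures are core, but rendering to them still needs this extension.
    return !!gl.getExtension('EXT_color_buffer_float');
  }
  // In WebGL1 both the texture format and its renderability are extensions.
  return !!gl.getExtension('OES_texture_float') &&
         !!gl.getExtension('WEBGL_color_buffer_float');
}
In practice some WebGL1 implementations expose OES_texture_float without advertising WEBGL_color_buffer_float, so a more robust probe is to attach a float texture to a framebuffer and call checkFramebufferStatus.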
Does anybody know how to create/apply grunge or vintage-worn filters? I'm creating an iOS app to apply filters to photos, just for fun and to learn more about CIImage. Right now I'm using Core Image to apply CIGaussianBlur, CIGloom, and the like through commands such as ciFilter.setValue(value, forKey: key) and corresponding calls.
So far, Core Image filters such as blur, color adjustment, sharpen, and stylize work OK. But I'd like to learn how to apply one of those grunge, vintage-worn effects available in other photo editing apps, something like this:
Does anybody know how to create/apply those kinds of filters?
Thanks!!!
You have two options.
(1) Use "canned" filters in a chain. If the output of one filter is the input of the next, code things that way. It won't waste any resources until you actually call for output.
(2) Write your own kernel code. It can be a color kernel that mutates a single pixel independently, a warp kernel that moves pixels around (changing geometry rather than color), or a general kernel that can sample a pixel and its surrounding ones but isn't as optimized as the other two. Either way, you can write the kernel in a GLSL-like language (it's pretty much C for the GPU).
Okay, there's a third option - a combination of the two above options. Also, in iOS 11 and above, you can write kernels using Metal 2.
I am doing image manipulation on PNG images and have the following problem: after saving an image with the imwrite() function, the size of the image increases. For example, an image that was previously 847 KB becomes 1.20 MB after saving. Here is the code. I just read an image and then save it, but the size increases. I tried to set compression params but it doesn't help.
Mat image;
image = imread("5.png", -1);
vector<int> compression_params;
compression_params.push_back(CV_IMWRITE_PNG_COMPRESSION);
compression_params.push_back(9);
compression_params.push_back(0);
imwrite("output.png",image,compression_params);
What could be the problem? Any help please.
Thanks.
PNG has several options that influence the compression: the deflate compression level (0-9), the deflate strategy (HUFFMAN/FILTERED), and the choice (or strategy for dynamically choosing) of the internal prediction error filter (AVERAGE, PAETH...).
It seems OpenCV only lets you change the first one, and it doesn't have a good default value for the second. So it seems you must live with that.
Update: looking into the sources, it seems that a compression strategy setting has been added (after complaints), but it isn't documented. I wonder if that source has been released. Try setting the option CV_IMWRITE_PNG_STRATEGY to Z_FILTERED and see what happens.
See the linked source code for more details about the params.
@Karmar, it's been many years since your last edit.
I had a similar confusion to yours in June 2021, and I found out something which might benefit others like us.
PNG files have this thing called mode (in PIL's terminology). Here, let's focus only on three modes: RGB, P and L.
To quickly check an image's mode, you can use Python:
from PIL import Image
print(Image.open("5.png").mode)
Basically, P and L store 8 bits per pixel, while RGB uses 3×8 = 24 bits per pixel.
For more detailed explanation, one can refer to this fine stackoverflow post: What is the difference between images in 'P' and 'L' mode in PIL?
Now, when we use OpenCV to open a PNG file, what we get is an array of three uint8 channels, regardless of which mode the file was saved in. That decoded data is roughly three times the size of a P- or L-mode original, so when we imwrite this array back to a file, no matter how hard you compress it, it will be hard to beat the original file size if it was saved in P or L mode.
I guess @Karmar might have already had this question solved. For future readers, check the mode of your own 5.png.
I'm using the SharpDX Toolkit, and I'm trying to create a Texture2D programmatically, so I can manually specify all the pixel values. And I'm not sure what pixel format to create it with.
SharpDX doesn't even document the toolkit's PixelFormat type (they have documentation for another PixelFormat class but it's for WIC, not the toolkit). I did find the DirectX enum it wraps, DXGI_FORMAT, but its documentation doesn't give any useful guidance on how I would choose a format.
I'm used to plain old 32-bit bitmap formats with 8 bits per color channel plus 8-bit alpha, which is plenty good enough for me. So I'm guessing the simplest choices will be R8G8B8A8 or B8G8R8A8. Does it matter which I choose? Will they both be fully supported on all hardware?
And even once I've chosen one of those, I then need to further specify whether it's SInt, SNorm, Typeless, UInt, UNorm, or UNormSRgb. I don't need the sRGB colorspace. I don't understand what Typeless is supposed to be for. UInt seems like the simplest -- just a plain old unsigned byte -- but it turns out it doesn't work; I don't get an error, but my texture won't draw anything to the screen. UNorm works, but there's nothing in the documentation that explains why UInt doesn't. So now I'm paranoid that UNorm might not work on some other video card.
Here's the code I've got, if anyone wants to see it. Download the SharpDX full package, open the SharpDXToolkitSamples project, go to the SpriteBatchAndFont.WinRTXaml project, open the SpriteBatchAndFontGame class, and add code where indicated:
// Add new field to the class:
private Texture2D _newTexture;
// Add at the end of the LoadContent method:
_newTexture = Texture2D.New(GraphicsDevice, 8, 8, PixelFormat.R8G8B8A8.UNorm);
var colorData = new Color[_newTexture.Width*_newTexture.Height];
_newTexture.GetData(colorData);
for (var i = 0; i < colorData.Length; ++i)
    colorData[i] = (i % 3 == 0) ? Color.Red : Color.Transparent;
_newTexture.SetData(colorData);
// Add inside the Draw method, just before the call to spriteBatch.End():
spriteBatch.Draw(_newTexture, new Vector2(0, 0), Color.White);
This draws a small rectangle with diagonal lines in the top left of the screen. It works on the laptop I'm testing it on, but I have no idea how to know whether that means it's going to work everywhere, nor do I have any idea whether it's going to be the most performant.
What pixel format should I use to make sure my app will work on all hardware, and to get the best performance?
The formats in the SharpDX Toolkit map to the underlying DirectX/DXGI formats, so you can, as usual with Microsoft products, get your info from the MSDN:
DXGI_FORMAT enumeration (Windows)
32-bit textures are a common choice for most texture scenarios and have good performance on older hardware. UNorm means, as already answered in the comments, "unsigned, normalized to the range 0.0 .. 1.0" and is, again, a common way to access color data in textures.
If you look at the Hardware Support for Direct3D 10Level9 Formats (Windows) page, you will see that DXGI_FORMAT_R8G8B8A8_UNORM as well as DXGI_FORMAT_B8G8R8A8_UNORM are supported on DirectX 9 hardware. You will not run into compatibility problems with either of them.
Performance depends on how your Device is initialized (RGBA/BGRA?) and what hardware (= supported DX feature level) and OS you are running your software on. You will have to run your own tests to find out (though for these common and similar formats the difference should be a single-digit percentage at most).
Is there anything faster than ImageMagick? I'm processing images with different kinds of filters to create effects like old photo, oil paint, etc.
Check GraphicsMagick. It is based on IM, but faster and less complicated.
Performance comparison (GraphicsMagick vs ImageMagick):
http://www.graphicsmagick.org/benchmarks.html
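If you happen to be calling these tools from Node, one low-effort way to benchmark them on your own filters is the gm package, which exposes the same API over either backend; this is only a sketch, and the file names and filter settings are placeholders:
const gm = require('gm');                        // GraphicsMagick backend (default)
const im = gm.subClass({ imageMagick: true });   // same API, ImageMagick backend

// Run the same "old photo"-style operations through each backend and time them.
console.time('GraphicsMagick');
gm('input.jpg').modulate(110, 40, 100).blur(0, 1).write('out-gm.jpg', err => {
  console.timeEnd('GraphicsMagick');
  if (err) console.error(err);
});

console.time('ImageMagick');
im('input.jpg').modulate(110, 40, 100).blur(0, 1).write('out-im.jpg', err => {
  console.timeEnd('ImageMagick');
  if (err) console.error(err);
});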