Is it possible to display an EXR/HDR image, or even just RGB colors with different luminance levels, from a programming language?
I tried SDL, Python, C++, OpenGL and also Qt, but none of them has much documentation about this.
I also looked at io and oiio; all of them apply tone mapping when displaying the image.
What I would like is something like the DisplayHDR Test tool from VESA, which shows test patterns at different luminance levels: a monitor limited to 100 nits would show no difference between a 100-nit and a 200-nit patch, but a monitor capable of more than 200 nits would, for example.
I am not sure, but I think they use video rather than still images.
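One piece of the puzzle is language-independent: to show a patch at an absolute luminance you typically encode linear nits with the SMPTE ST 2084 (PQ) transfer function used by HDR10 and present the result through an HDR-capable swap chain; without that, the OS or driver tone-maps and a 100-nit monitor shows no difference. A minimal sketch of the PQ encoding step in Python (the display/swap-chain side is omitted, and the surrounding claims about presentation are assumptions about a typical HDR10 pipeline):

```python
# Sketch: SMPTE ST 2084 (PQ) inverse EOTF, the encoding used by HDR10.
# Constants are the published ST 2084 values; input is absolute luminance.

def pq_encode(nits: float) -> float:
    """Map absolute luminance in cd/m^2 (nits) to a PQ code value in [0, 1]."""
    m1 = 2610 / 16384
    m2 = 2523 / 4096 * 128
    c1 = 3424 / 4096
    c2 = 2413 / 4096 * 32
    c3 = 2392 / 4096 * 32
    y = max(nits, 0.0) / 10000.0   # PQ is defined over 0..10000 nits
    y_m1 = y ** m1
    return ((c1 + c2 * y_m1) / (1 + c3 * y_m1)) ** m2

# 100-nit and 200-nit patches get clearly different code values, so a
# capable monitor can render them differently:
print(pq_encode(100), pq_encode(200))
```

On a display that only reaches 100 nits, both code values would be clipped or tone-mapped to roughly the same brightness, which is exactly the behavior the VESA test patterns exploit.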
Let's say I have an image with a few colors.
I would like to replace programmatically a specific existing color by a new one.
(something simple, no need to support gradients, like I saw elsewhere).
E.g. I have an image showing a green circle and I want to display it as a red circle (every pixel initially defined with a given (R,G,B) is now displayed with a new (R,G,B)).
Any idea how to do that with the Apple iOS SDK? (or open source ...)
And by the way, what would be the best image file format to make this easier (png, jpg, ...)?
Thanks!
You should be able to do this using Core Image filters. The Color Cube CI filter lets you map a source color range to destination colors: you define a source color range and map it to different colors.
That's one CI filter I never figured out how to use fully, however. If you search for "Color Cube" in the Xcode help system, there is sample code that does a "chroma key" effect, knocking out green shades to transparent. You should be able to adapt that to your needs.
I have a project on GitHub called CIFilterTest that shows how to use Core Image filters to process images. It's written as a general-purpose system that lets you try a wide variety of filters that use a standard set of parameters (points, colors, one or two source images, and floating-point values). I never did take the time to generate the 3D color-mapping "cube" that the Color Cube filter needs as input, so it doesn't let you use that particular filter. You'll have to look at the Color Cube sample code in the Xcode docs to generate inputs for that filter, but my sample app should help a great deal with the basic setup for CI-based image processing.
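The cube data itself is just math, so it can be sketched in any language. Below is a hedged Python sketch of generating the flat RGBA lookup table that CIColorCube expects (cube dimension up to 64, blue varying slowest, per Apple's sample code), remapping green hues to red; the hue thresholds are illustrative assumptions you would tune:

```python
# Sketch: build CIColorCube input data that remaps green hues to red.
# Layout assumption (from Apple's sample code): a flat N*N*N*4 float array,
# blue varying slowest; thresholds below are tunable guesses.
import colorsys

def make_color_cube(n=64):
    data = []
    for b in range(n):
        for g in range(n):
            for r in range(n):
                rf, gf, bf = r / (n - 1), g / (n - 1), b / (n - 1)
                h, s, v = colorsys.rgb_to_hsv(rf, gf, bf)
                # treat hues around 1/3 (green) as the source color range
                if 0.25 < h < 0.42 and s > 0.3:
                    rf, gf, bf = colorsys.hsv_to_rgb(0.0, s, v)  # shift to red
                data.extend([rf, gf, bf, 1.0])  # alpha = 1
    return data

cube = make_color_cube(64)
print(len(cube))  # 64*64*64*4 floats, ready to pass as the cube data
```

In the app you would pack these floats into an `NSData`/`Data` buffer and hand it to the filter along with the cube dimension.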
I answered a similar question here:
Replace particular color of image in iOS
In short: I would suggest using a Core Image filter.
I need to present some microscopy images side by side in a presentation and ensure they are directly comparable by eye. The pictures were all taken with the same exposure, gain, et cetera, so the underlying pixel values should be comparable.
However, the microscopy software has a nasty habit of saving the files with one of the colour channels saturated (for some reason), so I have to process the images for presentations.
Previously I'd been using a macro which processes through a folder and calls the scripting command
run("Enhance Contrast", "saturated=0.35");
But on reflection I don't think this is the proper command to call: I don't think it would produce images that are directly comparable to each other by eye.
I had thought that the command
run("Color Balance...");
resetMinAndMax();
would be best as it should show the full display range. But the display values shown on the histogram do vary depending on the image.
Is this appropriate for making directly comparable images, or would a command like
setMinAndMax();
be more appropriate, with 0 as the minimum and an arbitrary figure as the maximum? This is driving me mad, as I keep getting asked whether my images are directly comparable, but I simply don't know!
Usually, resetMinAndMax(); is the best way to ensure that your images are displayed consistently.
Note however that it also depends on the bit depth of your images.
8-bit grayscale images and RGB color images are displayed with a range of 0-255 upon resetMinAndMax().
For 16-bit (short) and 32-bit (float) images, the display range is calculated from the actual minimum and maximum values of the image (see the documentation of the Brightness/Contrast dialog and the resetMinAndMax() macro function).
So for 16-bit and 32-bit images, you can use the Set Display Range dialog (Set button in the B&C window) with one of the default Unsigned 16-bit range options to ensure consistent display, or the macro call:
setMinAndMax(0, 65535);
If you want to use the images in a presentation, copy them using Edit > Copy to System, or convert them to either 8-bit or RGB before saving them and inserting them in a presentation.
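To see why a fixed range makes images comparable, here is a minimal Python sketch of the scaling that a fixed setMinAndMax(0, 65535) implies: every image is mapped to 8-bit display values with the same linear transform, so equal raw pixel values always render at equal brightness (the sample values are made up for illustration):

```python
# Sketch: fixed display range vs per-image auto-contrast.
# Both images are scaled with the SAME min/max, like setMinAndMax(0, 65535),
# so a given raw 16-bit value always maps to the same display brightness.

def to_display(pixels, lo=0, hi=65535):
    """Linearly map raw 16-bit values to 0-255 using a fixed display range."""
    span = hi - lo
    return [max(0, min(255, round((p - lo) * 255 / span))) for p in pixels]

img_a = [0, 1000, 30000]      # hypothetical raw values from image A
img_b = [1000, 30000, 65535]  # ...and from image B

# The shared raw value 30000 renders identically in both images:
print(to_display(img_a), to_display(img_b))
```

A per-image resetMinAndMax() on 16-bit data would instead stretch each image's own min/max to 0-255, so the same raw value could end up at different brightnesses in the two images.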
The grid is blue/green. The work is in black ink, with a fair bit of variety of pressure, which I want to retain.
Here's a link to a small selection.
I have Photoshop v3
My attempts have involved using Select, Color Range, and sampling some grid, then inverting.
Is there a better way?
I also have some experience with Python and PIL, if that's a useful alternative.
This is a Photoshop answer, rather than a programming answer, but that seems to match your question's needs.
I applied a Black and White adjustment with a Blue filter enabled, then set the Blue channel sensitivity to 300% in Photoshop CC, and got pretty good results.
In an older version of Photoshop, you may need to go to Image->Mode->Lab Color, then go into the Channels palette and deselect Lab, leaving just the a and b channels selected, then use Select->Color Range to get the blues (or maybe the blacks!) before going back to RGB mode.
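Since you mention Python and PIL: the same idea works programmatically by classifying each pixel as grid (blue/green tint) or ink (dark and neutral) and whitening the grid. A minimal sketch of the per-pixel rule, with thresholds that are guesses to tune against your scans; with PIL you would apply it via img.getdata()/img.putdata():

```python
# Sketch (heuristic thresholds are assumptions to tune): classify each pixel
# as "grid" (blue/green tint) or "ink" (dark, neutral) and whiten the grid.

def is_grid(r, g, b):
    """True for blue/green grid pixels; False for dark, neutral ink pixels."""
    luminance = 0.299 * r + 0.587 * g + 0.114 * b
    blue_green_excess = max(g, b) - r   # how much the pixel leans blue/green
    return blue_green_excess > 30 and luminance > 60

def clean_pixel(px):
    r, g, b = px[:3]
    return (255, 255, 255) if is_grid(r, g, b) else (r, g, b)

print(clean_pixel((90, 160, 200)))   # pale blue grid pixel -> whitened
print(clean_pixel((20, 20, 25)))     # black ink pixel -> kept unchanged
```

Because ink pressure shows up as variation in darkness rather than in hue, a rule keyed on blue/green excess should preserve the line-weight variety you want to retain.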
The native Android SDK supports differentiating media files with drawable-x qualifiers, where x is one of ldpi (low), mdpi (medium), hdpi (high), or xhdpi (extra high).
Is there a way in Kivy to control which media should be used for which pixel density?
You can access the dpi, density, resolution, etc. through the kivy.metrics module, which on Android uses the values reported by the system. Once you have that, you can easily choose a different image source depending on its value, though I don't think there is a standard property or widget for this.
I'm not really familiar with the mechanism and advantages of the normal Java method here, but it would probably be quite easy to make something very similar in Kivy. For instance, you could make your own Image widget subclass that chooses a specific image of a certain size depending on the pixel density.
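A minimal sketch of that idea in plain Python, mirroring the Android bucket names from the question (the cutoff values and directory layout are illustrative assumptions; in Kivy you would read the dpi from kivy.metrics.Metrics.dpi and set the widget's source accordingly):

```python
# Sketch: map a reported dpi to an Android-style density bucket and pick
# the matching image path. Cutoffs and paths are illustrative assumptions.

BUCKETS = [(120, "ldpi"), (160, "mdpi"), (240, "hdpi"), (320, "xhdpi")]

def density_bucket(dpi: float) -> str:
    for cutoff, name in BUCKETS:
        if dpi <= cutoff:
            return name
    return "xhdpi"  # clamp anything denser to the highest bucket

def image_source(basename: str, dpi: float) -> str:
    return f"images/{density_bucket(dpi)}/{basename}"

print(image_source("icon.png", 160))
```

An Image subclass would just call something like image_source() in an on_parent or on_kv_post hook and assign the result to self.source.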
I am in my final year of a BS in Computer Science. I have chosen a project in the image processing domain, but I really don't know where to start! Here is a rough draft of my project idea:
Project Description:
Often people are faced with the problem of deciding which colors to choose to paint their walls, doors and ceilings. They want to know what their rooms will look like after applying a certain color. We want to design a mobile application that gives people the opportunity to preview their rooms/walls/ceilings, etc., in a certain color before applying it. Through our application the user can take photos of their rooms/walls/ceilings, change their colors virtually, and preview them. This will give them a good estimate of the final look of their house.
Development will be in Java using OpenCV libraries.
Can anyone provide some help?
For starting OpenCV with android you can follow the tutorial here.
Based on your description above, I think you need to do the following:
Filter out the color of the room's wall or ceiling.
Replace it with your preview color.
But as your room's color is not uniform, you may need to mark the color manually and segment it. The watershed algorithm might be helpful here.
One more thing: there is a chance of lighting variation, so you should use the HSV color space instead of RGB.
Finally, this is not the full solution, but it should give you some idea of how to start with your project.
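To make the HSV suggestion concrete, here is a small Python sketch of the recoloring step: take the preview color's hue and saturation but keep each wall pixel's original value (brightness), so shadows and lighting survive the repaint. colorsys stands in for cv2.cvtColor here, and in the real app you would apply this only inside the segmented wall mask:

```python
# Sketch of the HSV recoloring idea: new hue/saturation, original brightness.
# In OpenCV you'd do the same on the H/S channels within the wall mask.
import colorsys

def recolor(pixel_rgb, target_rgb):
    """Recolor a pixel to the target hue/saturation, keeping its brightness."""
    _, _, v = colorsys.rgb_to_hsv(*[c / 255 for c in pixel_rgb])
    th, ts, _ = colorsys.rgb_to_hsv(*[c / 255 for c in target_rgb])
    r, g, b = colorsys.hsv_to_rgb(th, ts, v)
    return tuple(round(c * 255) for c in (r, g, b))

# A dim beige wall pixel repainted with a pure red preview color keeps its
# darkness but takes on the red hue:
print(recolor((120, 110, 95), (255, 0, 0)))
```

Doing the same swap in RGB directly would flatten the shading, which is why the answer above recommends HSV.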
ImageMagick is a famous image processing library; you might look at that one too. It can perform numerous operations on images.
Thanks