I want to apply image effects to an image, such as sepia, charcoal, monochrome, emboss, negative, etc.
I also want to adjust hue, saturation, brightness, and contrast.
You can extract the image data into an array using Bitmap.getARGB(...). API link here.
Then you can apply any filtering effect to that data to prepare your image. You can check this link to get some helpful hints about how to apply filters to images.
And to build an image from the prepared data, use Bitmap.setARGB(...). API link here.
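The getARGB / filter / setARGB flow is language-agnostic, so here is a minimal sketch in Python of one such filter (sepia), operating on the same kind of flat array of 0xAARRGGBB ints that getARGB would hand you. The weights are the commonly used sepia coefficients, not anything from the Bitmap API itself:

```python
def sepia_argb(pixels):
    """Apply a sepia tone to a flat list of 0xAARRGGBB ints,
    mirroring the getARGB -> filter -> setARGB flow."""
    out = []
    for p in pixels:
        a = (p >> 24) & 0xFF
        r = (p >> 16) & 0xFF
        g = (p >> 8) & 0xFF
        b = p & 0xFF
        # Commonly used sepia weights; clamp each channel to 255.
        nr = min(255, int(0.393 * r + 0.769 * g + 0.189 * b))
        ng = min(255, int(0.349 * r + 0.686 * g + 0.168 * b))
        nb = min(255, int(0.272 * r + 0.534 * g + 0.131 * b))
        out.append((a << 24) | (nr << 16) | (ng << 8) | nb)
    return out
```

Hue, saturation, brightness, and contrast adjustments follow the same shape: unpack each pixel, transform the channels, repack.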
Let's say I have an image with a few colors.
I would like to replace programmatically a specific existing color by a new one.
(something simple, no need to support gradients, like I saw elsewhere).
E.g. I have an image showing a green circle and I want to display it as a red circle (every pixel initially defined with a given (R,G,B) is now displayed with a new (R,G,B)).
Any idea how to do that with the Apple iOS SDK? (Or open source...)
And by the way, what would be the best image file format to make this easier (PNG, JPG, ...)?
Thanks !
You should be able to do this using Core Image filters. The Color Cube CI filter lets you map a source color range to destination colors, so you should be able to define a source color range and map it to different colors.
That's one CI filter I never figured out how to use, however. If you search for "Color Cube" in the Xcode help system, there is sample code that does a "chroma key" effect, knocking out green shades to transparent. You should be able to adapt that to your needs.
I have a project on GitHub called CIFilterTest that shows how to use Core Image filters to process images. It's written as a general-purpose system that lets you try a wide variety of filters that use a standard set of parameters (points, colors, 1 or 2 source images, and floating-point values). I never did take the time to generate the 3D color mapping "cube" that the Color Cube filter needs as input, so it doesn't allow you to use that particular filter. You'll have to look at the Color Cube sample code in the Xcode docs to generate inputs for the Color Cube filter, but my sample app should help a great deal with the basic setup for doing CI-based image processing.
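For reference, Apple's chroma-key sample builds that cube by walking every RGB cell, converting it to HSV, and zeroing the alpha of green hues. A rough Python sketch of the cube-generation step (the hue range and the cube dimension of 16 are assumptions to tune; the resulting list would be converted to a float byte buffer and passed to CIColorCube as its cube data):

```python
import colorsys

def make_chromakey_cube(dim=16, hue_min=0.25, hue_max=0.42):
    """Build dim^3 RGBA float entries for a color cube: colors whose
    hue falls in [hue_min, hue_max] (greens) become transparent.
    The hue range is an assumption; tune it for your source image."""
    cube = []
    for b in range(dim):
        for g in range(dim):
            for r in range(dim):
                rf, gf, bf = r / (dim - 1), g / (dim - 1), b / (dim - 1)
                h, s, v = colorsys.rgb_to_hsv(rf, gf, bf)
                alpha = 0.0 if hue_min <= h <= hue_max else 1.0
                # Color cube entries are premultiplied RGBA floats.
                cube.extend([rf * alpha, gf * alpha, bf * alpha, alpha])
    return cube
```

To replace a color instead of knocking it out, you would write a new RGB value into the matching cells rather than zeroing the alpha.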
I answered a similar question here:
Replace particular color of image in iOS
In short: I would suggest using a Core Image filter.
For example after taking the image, the app would tell you the relative amount of red, blue, green, and yellow present in the picture and how intense each color is.
That's super specific I know, but I would really like to know if it's possible and if anyone has any idea how to go about that.
Thanks!
Sure, it's possible. You'd have to load the image into a UIImage, get the underlying CGImage, and then get a pointer to the pixel data. If you average the RGB values of all the pixels, though, you're likely to get a pretty muddy result unless you're sampling an image with large areas of strong primary colors.
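The averaging itself is trivial once you have the pixel bytes; here's a sketch in Python on a list of 0xAARRGGBB ints (on iOS you'd read the same bytes from the CGImage's data provider):

```python
def average_color(pixels):
    """Average the R, G, B components of a list of 0xAARRGGBB ints.
    As noted above, this tends toward muddy mid-tones on busy images."""
    n = len(pixels)
    r = sum((p >> 16) & 0xFF for p in pixels) // n
    g = sum((p >> 8) & 0xFF for p in pixels) // n
    b = sum(p & 0xFF for p in pixels) // n
    return r, g, b
```

For "how much red vs. blue is present," you'd bucket each pixel by its dominant channel (or hue) instead of averaging everything together.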
Erica Sadun's excellent iOS Developer Cookbook series has a section on sampling pixel image data that shows how it's done. In recent versions there is a "core" and an "extended" volume. I think it's in the Core iOS volume. My copy of Mac iBooks is crashing repeatedly right now, so I can't find it for you. Sorry about that.
EDIT:
I got it to open on my iPad finally. It is in the Core volume, in recipe 1-6, "Testing Touches Against Bitmap Alpha Levels." As the title implies, that recipe looks at an image's alpha levels to figure out if you've tapped on an opaque image pixel or missed the image by tapping on a transparent pixel. You'll need to adapt that code to come up with the average color for an image, but Erica's code shows the hard part - getting and interpreting the bytes of image data. That book is all in Objective-C. Post a comment if you have trouble figuring it out.
The images created by the microscope that we use in our lab don't have a scale bar. Can I use the information stored in the image itself/information from the microscope itself in order to properly set the scale and convert pixels into nm?
I've downloaded the plugin called "Microscope Scale", but I don't understand how to use it.
Thank you!
Reading the description of the Microscope Scale plugin, it sounds like you would need to manually edit the plugin's source code to set the scale for your data.
Instead I would suggest using the Bio-Formats Importer to open your data, as it has a good chance of reading calibration values directly from your images (if they were saved by the microscope).
If Bio-Formats doesn't work, then you can manually set the scale using the Spatial Calibration plugin provided with Fiji to convert pixels to nm.
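If you do end up calibrating by hand, the underlying arithmetic is simple once you know the physical width of the imaged field (from the microscope's specifications or metadata). A sketch, with illustrative parameter names:

```python
def pixels_to_nm(length_px, image_width_px, field_width_nm):
    """Convert a length measured in pixels to nm, given the image
    width in pixels and the physical width of the imaged field in nm
    (taken from the microscope's metadata or specifications)."""
    nm_per_pixel = field_width_nm / image_width_px
    return length_px * nm_per_pixel
```

The nm-per-pixel value is exactly what you would enter in ImageJ's Set Scale dialog (as "distance in pixels" = 1, "known distance" = nm_per_pixel).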
Using GIMP, I am attempting to generate a large number of the same image but with different colors. In order to preserve the "shadowing", I am using the below steps. These steps get me exactly what I want in the end. The problem is, it's very tedious repeating them by hand over and over. There has to be a better way.
GIMP's batch scripting seems a little daunting, so I'm hoping to get some suggestions on how to automate this. Basically, what would be nice, is I'd like to essentially specify an array or list of colors...and then "automagically" perform the steps below to generate the output with the desired color.
Steps I'm doing by hand...
1.) Load a base PNG file that has an alpha channel.
2.) Add a new transparent layer.
3.) Activate the layer.
4.) Change mode to "multiply".
Then, for a range of different colors, I do the following...
5.) Select a foreground color.
6.) Apply bucket fill (fill similar colors, fill transparent areas, default threshold, fill by composite).
7.) Save the new PNG.
8.) Go to Step #5.
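The recipe above reduces to per-pixel arithmetic: a solid-color layer in Multiply mode computes out = base * color / 255 per channel, with alpha untouched. A minimal Python sketch of that math (no GIMP required, pixels assumed as (r, g, b, a) tuples):

```python
def multiply_recolor(pixels, color):
    """Per-pixel equivalent of steps 2-6: a solid-color layer in
    Multiply mode composited over the base image."""
    cr, cg, cb = color
    return [(r * cr // 255, g * cg // 255, b * cb // 255, a)
            for (r, g, b, a) in pixels]

def batch_recolor(pixels, colors):
    """Step 8's loop: one recolored copy per color in the list."""
    return {color: multiply_recolor(pixels, color) for color in colors}
```

Because the base image's shading multiplies through, the shadows are preserved in every recolored copy; the same loop could be written as a GIMP Script-Fu or Python-Fu batch script.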
Here's kind of a cheesy representation of the effect I'm trying to achieve...
I'm also open to other non-GIMP suggestions as well.
Thanks for any and all help and suggestions.
I can offer you a nice JavaScript example that does this.
Try:
https://stackoverflow.com/a/9304367/1726419
There is a link there that actually does what you want in JS - you can translate it to many other languages.
I want to replace a particular color in an image with another, user-selected color. While replacing the color, I want to maintain the gradient effect of the original color. For example, see the attached images.
I have tried to do this with Core Graphics and succeeded in replacing the color, but the replacement color does not maintain the gradient effect of the original color in the image.
Can someone help me with this? Is Core Graphics the right way to do this?
Thanks in advance.
After struggling with almost the same problem (but with NSImage), I made a category for replacing colors in an NSImage which uses the ColorCube CIFilter.
https://github.com/braginets/NSImage-replace-color
inspired by this code for UIImage (also uses CIColorCube):
https://github.com/vhbit/ColorCubeSample
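Independently of Core Image, the idea that preserves the gradient can be sketched directly: convert each pixel to HSV, swap only the hue of the color being replaced, and keep saturation and value, so the shading survives. A rough Python version (the tolerance value is an assumption, meant to catch anti-aliased edges):

```python
import colorsys

def replace_hue(pixels, old_hue, new_hue, tolerance=0.05):
    """Swap old_hue for new_hue while keeping each pixel's saturation
    and value, which is what preserves the gradient. Hues are in
    [0, 1); pixels are (r, g, b) tuples with 0-255 channels."""
    out = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        # Hue distance with wrap-around (red sits at both 0 and 1).
        d = min(abs(h - old_hue), 1 - abs(h - old_hue))
        if s > 0 and d <= tolerance:
            h = new_hue
        nr, ng, nb = colorsys.hsv_to_rgb(h, s, v)
        out.append((round(nr * 255), round(ng * 255), round(nb * 255)))
    return out
```

This is the same mapping the ColorCube approach bakes into its lookup table, just computed per pixel instead of per cube cell.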
I do a lot of color transfer/blend/replacement/swapping between images in my projects and have found the following publications very useful, both by Erik Reinhard:
Color Transfer Between Images
Real-Time Color Blending of Rendered and Captured Video
Unfortunately I can't post any source code (or images) right now because the results are being submitted to an upcoming conference, but I have implemented variations of the above algorithms with very pleasing results. I'm sure with some tweaks (and a bit of patience) you might be able to get what you're after!
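The core of Reinhard's color transfer is statistics matching: shift and scale each channel of the source so its mean and standard deviation match the target's. A simplified Python sketch of that step (the paper works in the decorrelated lαβ color space; applying it per RGB channel, as here, is a simplification):

```python
from statistics import mean, stdev

def transfer_channel(src, tgt):
    """Reinhard-style statistics transfer for one channel: remap src
    values so their mean and standard deviation match tgt's."""
    mu_s, mu_t = mean(src), mean(tgt)
    sd_s, sd_t = stdev(src), stdev(tgt)
    scale = sd_t / sd_s if sd_s else 1.0
    return [mu_t + (v - mu_s) * scale for v in src]
```

Run it once per channel (ideally after converting both images to lαβ) and the source image takes on the target's overall color statistics.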
EDIT:
Furthermore, the real challenge will lie in separating the different picture elements (e.g. isolating the wall). This is not unlike Photoshop's magic wand tool which obviously requires a lot of processing power and complex algorithms (and is still not perfect).