In the code below I was trying to set the red component of the first pixel to zero.
julia> image1 = load("background1.png");
julia> x = image1[1].r
0.776N0f8
julia> image1[1].r = 0
ERROR: type RGBA is immutable
It turns out the RGBA type in Julia is immutable. Is there a way I can change the individual pixels (R, G and B components) of an image?
Just make a new RGB. It's cheap to do:
image1 = load("background1.png")
x = image1[1]
image1[1] = RGB(0,x.g,x.b)
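If the element type really is RGBA (as the error message suggests) and you want to keep the original alpha, the same idea should work with the RGBA constructor; a minimal sketch, assuming the ColorTypes field name alpha:
image1[1] = RGBA(0, x.g, x.b, x.alpha)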
Consider an image given as the matrix [1 2 3; 4 5 6; 7 8 9]. How can we convert the given image to grayscale? I know that we need to get the R, G, B values of each pixel, and that using the formula 0.2*R + 0.7*G + 0.1*B we can get the grayscale value of each pixel. But how can I get the R, G, B values of each pixel?
Or is there a completely different method to convert the given image to grayscale?
First, you need to split each 24-bit pixel value into its R, G and B components. As an example, in MATLAB:
% If x is a 24 bit string in base 2
r_binary = x(1:8);
g_binary = x(9:16);
b_binary = x(17:24);
r_value = base2dec(r_binary,2);
g_value = base2dec(g_binary,2);
b_value = base2dec(b_binary,2);
% If x is a decimal value (assuming R is in the most significant byte,
% matching the bit-string case above)
r_value = rem(bitshift(x,-16),2^8);
g_value = rem(bitshift(x,-8),2^8);
b_value = rem(x,2^8);
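Once the components are separated, the weighted sum from the question gives the grayscale value. As a minimal sketch, assuming the image is instead loaded as an M-by-N-by-3 array (e.g. with imread; 'image.png' is a placeholder name):
I = im2double(imread('image.png'));                  % M-by-N-by-3, values in [0,1]
gray = 0.2*I(:,:,1) + 0.7*I(:,:,2) + 0.1*I(:,:,3);   % weighted sum of the R, G, B planes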
I want to replace GetPixel and SetPixel with the LockBits method, and I came across this F# code for lazy pixel reading:
open System.Drawing
open System.Drawing.Imaging
let pixels (image:Bitmap) =
    let Width = image.Width
    let Height = image.Height
    let rect = new Rectangle(0,0,Width,Height)
    // Lock the image for access
    let data = image.LockBits(rect, ImageLockMode.ReadOnly, image.PixelFormat)
    // Copy the data
    let ptr = data.Scan0
    let stride = data.Stride
    let bytes = stride * data.Height
    let values : byte[] = Array.zeroCreate bytes
    System.Runtime.InteropServices.Marshal.Copy(ptr,values,0,bytes)
    // Unlock the image
    image.UnlockBits(data)
    let pixelSize = 4 // <-- calculate this from the PixelFormat
    // Create and return a 3D-array with the copied data
    Array3D.init 3 Width Height (fun i x y ->
        values.[stride * y + x * pixelSize + i])
At the end, the code returns a 3D array with the copied data.
So the 3D array is a copy of the image. How do I edit the pixels of the 3D array, for example to change a colour? What is pixelSize for? Why store an image in a 3D byte array rather than a 2D one?
For example, if we wanted to use a 2D array instead, and I want to change the colours of specific pixels, how would we go about doing that?
Do we operate on the copied image byte array OUTSIDE the pixels function, or INSIDE the pixels function before unlocking the image?
If we no longer use GetPixel or SetPixel, how do I retrieve the colour of a pixel from the copied byte[]?
In case my questions are unclear: please explain how I can use the code above to do an operation such as "add 50" to the R, G and B values of every pixel of a given image, without GetPixel and SetPixel.
The first index of the 3D array is the colour component, so the element at [i, 78, 218] is the i-th colour component of the pixel at (78, 218); for a typical 32-bit bitmap the bytes are laid out B, G, R, A, so component 0 is blue.
Like this:
Array2D.init Width Height (fun x y ->
    let color i = values.[stride * y + x * pixelSize + i] |> int
    // bytes are stored B,G,R(,A) and Color is built with FromArgb(r, g, b)
    Color.FromArgb(color 2, color 1, color 0))
Since the image is copied, it doesn't make a difference whether you mess with it before or after unlocking the image. The locking is there to make sure nobody changes the image while you do the actual copying.
The values array is a flattened 2D array. The 2D index [x, y] starts at byte offset stride * y + x * pixelSize, and the colour components then occupy a byte each. That is why this finds the i-th colour component at (x, y):
values.[stride * y + x * pixelSize + i] |> int
To add 50 to every pixel, it's easier to use the original 3D array. Suppose you have an image myImage:
pixels myImage |> Array3D.map (fun v -> v + 50uy) // byte arithmetic wraps past 255, so clamp first if that matters
The result is still a 3D byte array, not an Image. If you need an Image, you'll need to construct one, somehow, from the array you now have.
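To go back the other way, here is a minimal sketch (my addition, not part of the original answer) of writing a flat byte array back into an existing Bitmap with LockBits; it assumes the array has the same stride * Height layout as values above, so the 3D array returned by pixels would first have to be flattened back into that layout:
let writePixels (image:Bitmap) (modified:byte[]) =
    let rect = new Rectangle(0, 0, image.Width, image.Height)
    // Lock the bitmap for writing
    let data = image.LockBits(rect, ImageLockMode.WriteOnly, image.PixelFormat)
    // Copy the modified bytes back into the bitmap's buffer
    System.Runtime.InteropServices.Marshal.Copy(modified, 0, data.Scan0, modified.Length)
    image.UnlockBits(data)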
I have the visualization output of a Gabor filter with 12 different orientations. I want to superimpose the visualization image on my image of a retina for vessel extraction. How do I do it? I have tried the method below. Is there any other method to perform superimposition of images in MATLAB?
Here is my code:
I = getimage();
I=I(:,:,2);
lambda = 8;
theta = 0;
psi = [0 pi/2];
gamma = 0.5;
bw = 1;
N = 2;
img_in = im2double(I);
%img_in(:,:,2:3) = []; % discard redundant channels, it's gray anyway
img_out = zeros(size(img_in,1), size(img_in,2), N);
for n=1:N
    gb = gabor_fn(bw,gamma,psi(1),lambda,theta)...
        + 1i * gabor_fn(bw,gamma,psi(2),lambda,theta);
    % gb is the n-th gabor filter
    img_out(:,:,n) = imfilter(img_in, gb, 'symmetric');
    % filter output to the n-th channel
    %theta = theta + 2*pi/N
    %figure;
    %imshow(img_out(:,:,n));
    imshow(img_in); hold on;
    h = imagesc(img_out(:,:,n)); % here i am getting error saying CDATA must be size[M*N]
    set( h, 'AlphaData', .5 ); % .5 transparency
    figure;
    imshow(h);
    theta = 15 * n; % next orientation
end
This is my original image.
This is the visualized image obtained from the Gabor filter using one orientation.
This is the kind of image I need to obtain: I have to superimpose the visualized image on my original image and get this type of result.
With the information you have provided, my understanding is you want the third/final image to be an overlay on top of the first/initial image. I do things like this when using segmentation to detect hemorrhaging in MRI images of the brain.
First, let's set up some definitions:
I_src = source/original image
I_out = output/final image
Now, make a copy of I_src and make it a color image rather than grayscale (gray2rgb is from the reference below):
I_hybrid = gray2rgb(I_src);
Let's assume both I_src and I_out have the same dimensions (i.e. width and height), and that I_out is strictly black-and-white (i.e. monochrome). Now we can use I_out as a mask template for adjustments to the resulting image. This is where it gets fun.
BLACK = 0;
WHITE = 1;
[height, width] = size(I_out);
for i = 1:height
    for j = 1:width
        if I_out(i,j) == WHITE
            % brighten the red channel where the mask is white
            I_hybrid(i,j,1) = I_hybrid(i,j,1) + 0.25;
        end
    end
end
This will give you your original image with the blood vessels in the eye slightly brighter and tinted red. You now have a beautiful composite of your original image with the desired features highlighted, but not overwritten (i.e. you can undo the highlighting by subtracting the original color vector).
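As a side note (my addition, not part of the original answer), the same highlighting can be done without the explicit loops, assuming I_out really is a 0/1 mask:
I_hybrid(:,:,1) = I_hybrid(:,:,1) + 0.25 * (I_out == WHITE);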
I will include an example of what the output would look like, but it's noisy because I had to create it in GIMP as I don't have Matlab installed right now. The results will be similar, but yours would be much cleaner and prettier.
Please let me know how this goes.
References
"Converting Images from Grayscale to Color" http://blogs.mathworks.com/pick/2012/11/25/converting-images-from-grayscale-to-color/
I converted a PNG (RGBA) to JPEG (RGB) using libpng to decode the PNG file and applying png_set_strip_alpha to ignore the alpha channel. But after conversion the output image has many spots. I think the reason is that the original image has areas whose alpha was 0, which hides those pixels regardless of their RGB values, and when I strip the alpha (i.e. treat the pixels as fully opaque) they show through. So just using png_set_strip_alpha is not the right solution. Should I write a method myself, or is there already a way to achieve this in libpng?
There is no method for that. If you drop the alpha channel, libpng will give you the raw RGB channels, and this will "uncover" colors that were previously invisible.
You should load the RGBA image and convert it to RGB yourself. The simplest way is to multiply the RGB values by the alpha.
This will convert an RGBA bitmap to RGB in place:
for(int i=0; i < width*height; i++) {
int r = bitmap[i*4+0],
g = bitmap[i*4+1],
b = bitmap[i*4+2],
a = bitmap[i*4+3];
bitmap[i*3+0] = r * a / 255;
bitmap[i*3+1] = g * a / 255;
bitmap[i*3+2] = b * a / 255;
}
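Multiplying by alpha composites the image onto black. If you would rather have the transparent areas fall back to some other background colour (white is common for JPEG output), a variant of the same loop is sketched below; this is my addition, not part of the original answer, and bg_r, bg_g, bg_b are the chosen background colour in 0..255:
/* Composite RGBA onto a solid background colour, in place (RGBA -> RGB) */
for(int i=0; i < width*height; i++) {
    int r = bitmap[i*4+0],
        g = bitmap[i*4+1],
        b = bitmap[i*4+2],
        a = bitmap[i*4+3];
    bitmap[i*3+0] = (r * a + bg_r * (255 - a)) / 255;
    bitmap[i*3+1] = (g * a + bg_g * (255 - a)) / 255;
    bitmap[i*3+2] = (b * a + bg_b * (255 - a)) / 255;
}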
Each pixel's memory contains 8 bits for each of the Blue, Green and Red components. So how can I separate these components from the image, or from the image matrix? Something like:
int Blue = f(Image(x,y)); // (x,y) = coordinates of a pixel of Image
and similarly for red and green.
So what should the function f and the 2D matrix Image be?
Thanks in advance
First off, you should go through the basics of OpenCV before turning your attention to other parts of image processing. What you ask for is pretty basic. Assuming you are using OpenCV 2.1 or higher:
cv::Mat img = cv::imread("MyImage.bmp"); // read the image off the disk, or fill it some other way
To access the RGB values:
img.at<cv::Vec3b>(x,y);
But this gives the values in reverse order, that is BGR, so make sure you note this. Also note that at takes its indices in (row, column) order.
Basically it is a cv::Vec3b that is accessed:
img.at<cv::Vec3b>(x,y)[0]; // B
img.at<cv::Vec3b>(x,y)[1]; // G
img.at<cv::Vec3b>(x,y)[2]; // R
or, copying the pixel out first (for the usual 8-bit image the type is Vec3b; Vec3f would be for a float image):
cv::Vec3b pixel = img.at<cv::Vec3b>(x, y);
int b = pixel[0];
int g = pixel[1];
int r = pixel[2];
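As a small addition (not part of the original answer), writing works the same way; for example, zeroing the blue component of one pixel:
img.at<cv::Vec3b>(x, y)[0] = 0; // set the blue component of that pixel to zero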
Now, to split the image into its separate channels, you can use the following.
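In the C++ API this is cv::split; a minimal sketch (my addition, since the original C++ snippet is not shown here):
std::vector<cv::Mat> channels;
cv::split(img, channels); // channels[0] = B, channels[1] = G, channels[2] = R for a BGR image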
Going down to the primitive C style of OpenCV (both the C and C++ styles are supported), you can use the cvSplit function:
IplImage* rgb = cvLoadImage("C:/MyImage.bmp");
//now create three single channel images for the channel separation
IplImage* r = cvCreateImage( cvGetSize(rgb), rgb->depth,1 );
IplImage* g = cvCreateImage( cvGetSize(rgb), rgb->depth,1 );
IplImage* b = cvCreateImage( cvGetSize(rgb), rgb->depth,1 );
cvSplit(rgb,b,g,r,NULL);
The OpenCV 2 Cookbook is one of the best books on OpenCV and will help you a lot.