I have a plotter like this one:
The task I have to implement is the conversion of a 24-bit BMP into a set of instructions for this plotter. The plotter can hold 16 common colors. The first difficulty I face is color reduction; the second is how to transform pixels into a set of drawing instructions.
A brush with oil paint will be used as the drawing tool, which means the plotter's lines will not be very thin and will be relatively short.
Which algorithms could be used to solve this image data conversion problem?
Some initial results:
Dithering
Well, I got some time for this today, so here is the result. You did not provide your plotter's color palette, so I extracted it from your resulting images, but you can use any. The idea behind dithering is simple: our perception integrates color over an area, not over individual pixels, so you keep an accumulator of the color difference between what is rendered and what should have been rendered, and add it to the next pixel...
This way the area has approximately the same average color even though only a discrete number of colors is actually used. How this error information is propagated distinguishes the many dithering methods. The simple, straightforward version is this:
reset color accumulator to zero
for each pixel:
    add its color to the accumulator
    find the closest match to the accumulator in your palette
    render the selected palette color
    subtract the selected palette color from the accumulator
Here is your input image (I put them together):
Here is the result image for your source:
The color squares in the upper-left corner are just the palette I used (extracted from your image).
Here is the C++ code I do this with:
picture pic0,pic1,pic2;
// pic0 - source img
// pic1 - source pal
// pic2 - output img
int x,y,i,j,d,d0,e;
int r,g,b,r0,g0,b0;
color c;
List<color> pal;
// resize output to source image size clear with black
pic2=pic0; pic2.clear(0);
// create distinct colors pal[] list from palette image
for (y=0;y<pic1.ys;y++)
  for (x=0;x<pic1.xs;x++)
  {
    c=pic1.p[y][x];
    for (i=0;i<pal.num;i++) if (pal[i].dd==c.dd) { i=-1; break; }
    if (i>=0) pal.add(c);
  }
// dithering
r0=0; g0=0; b0=0; // no leftovers
for (y=0;y<pic0.ys;y++)
  for (x=0;x<pic0.xs;x++)
  {
    // get source pixel color
    c=pic0.p[y][x];
    // add to leftovers
    r0+=WORD(c.db[picture::_r]);
    g0+=WORD(c.db[picture::_g]);
    b0+=WORD(c.db[picture::_b]);
    // find closest color from pal[]
    for (i=0,j=-1;i<pal.num;i++)
    {
      c=pal[i];
      r=WORD(c.db[picture::_r]);
      g=WORD(c.db[picture::_g]);
      b=WORD(c.db[picture::_b]);
      e=(r-r0); e*=e; d =e;
      e=(g-g0); e*=e; d+=e;
      e=(b-b0); e*=e; d+=e;
      if ((j<0)||(d0>d)) { d0=d; j=i; }
    }
    // get selected palette color
    c=pal[j];
    // sub from leftovers
    r0-=WORD(c.db[picture::_r]);
    g0-=WORD(c.db[picture::_g]);
    b0-=WORD(c.db[picture::_b]);
    // copy to destination image
    pic2.p[y][x]=c;
  }
// render found palette pal[] (visual check/debug)
x=0; y=0; r=16; g=pic2.xs/r; if (g>pal.num) g=pal.num;
for (y=0;y<r;y++)
  for (i=0;i<g;i++)
    for (c=pal[i],x=0;x<r;x++)
      pic2.p[y][x+(i*r)]=c;
where picture is my image class, so here are some of its members:
xs,ys - resolution
color p[ys][xs] - direct pixel access (32-bit pixel format, so 8 bits per channel)
clear(DWORD c) - fills the image with color c
The color is just a union of DWORD dd and BYTE db[4] for simple channel access.
The List<> is my template (a dynamic array/list):
List<int> a is the same as int a[].
add(b) appends b to the end of the list.
num is the number of items in the list.
Now, to avoid too many dots (for the sake of your plotter's lifespan) you can instead use different line patterns etc., but that needs a lot of trial and error... For example, you can count how often a color is used in some area and, from that ratio, choose different fill patterns (based on lines). You need to trade off image quality against rendering speed and durability...
Without more info about your plotter's capabilities (speeds, method of tool change, color-combination behavior) it is hard to decide on the best way to form the control stream. My bet is that you change the colors manually, so you will render all of each color at once: extract all pixels matching the first tool's color, merge adjacent pixels into lines/curves, and render them... then move on to the next tool color... A sketch of that merging step follows below.
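As a rough illustration of that per-color pass, here is a minimal Python sketch; emit_line is a hypothetical stand-in for your plotter's real instruction output, and the image is assumed to be already quantized to palette indices:

# Sketch: merge horizontal runs of one palette color into pen strokes.
# `image` is a list of rows of palette indices; `emit_line` is a
# hypothetical callback producing one plotter instruction per stroke.
def render_color(image, color_index, emit_line):
    for y, row in enumerate(image):
        x = 0
        while x < len(row):
            if row[x] == color_index:
                start = x
                while x < len(row) and row[x] == color_index:
                    x += 1
                # one pen-down stroke from (start, y) to (x - 1, y)
                emit_line(start, y, x - 1, y)
            else:
                x += 1

# usage: one pass per tool color, changing the brush between passes
# for idx in range(16):
#     render_color(quantized, idx, emit_line)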
Related
I have sRGB images with color casts. To remove them manually I usually use Photoshop's Levels adjustments. Photoshop also has tools for that: Auto Contrast, or even better Auto Tone, which also takes shadows, midtones & highlights into account.
If I remove the cast manually, I adjust each of the RGB channels individually so that the darkest pixels are set to pure black and the lightest to pure white, and then redistribute all other values (spreading the histogram). This is a simple approach but it shows good results for my images.
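In NumPy terms, that per-channel stretch is roughly the following (just a sketch of the manual operation, not the sharp/libvips call I am looking for):

import numpy as np

# stretch each channel so its darkest value maps to 0 and its
# lightest to 255 (no clipping of outliers)
def spread_channels(img):  # img: uint8 array of shape (h, w, 3)
    out = np.empty_like(img)
    for c in range(3):
        ch = img[..., c].astype(np.float64)
        lo, hi = ch.min(), ch.max()
        out[..., c] = ((ch - lo) * 255.0 / max(hi - lo, 1.0)).astype(np.uint8)
    return out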
In my node.js app I'm using sharp for image processing, which uses libvips as its processing engine. I tried to remove the cast with .normalize(), but this command works on all channels together and not individually on each of the RGB channels, so it doesn't work for me.
I also asked this question on the sharp project page. I tested the suggestion from lovell to try it with hist_local, but the results are not usable for me.
Now I would like to find out how this could be done using native libvips. I've played around with the nip2 GUI and different commands but could not figure out how it could be achieved:
Histogram > Equalise Histogram > Global => picture looks oversaturated
Image > Levels > Scale to 0 - 255 => channels are not all spread from 0 - 255 (I don't understand exactly what this command does)
Thanks for every hint!
Addition
Here is an example with pictures from Photoshop to show what I want.
The source image is a picture of a frame from a film negative.
Image before processing
Step 1: Invert image
Image after inversion
Step 2: using Auto Tone in Photoshop (works the same way as my description above about manually removing the color cast)
Image after Auto Tone
This last picture is ok for me.
nip2 has a menu item for this.
Load your image and mark a region on it containing the area you'd like to be neutral. It can be any lightness, it doesn't need to be white.
Use File / Open to get the file dialog and you should see the image loaded in your workspace as a thumbnail.
Double-click on the thumbnail to open an image view window.
In the view window, zoom and pan to the right spot. The user guide (press F1) has a section on image navigation.
Hold down CTRL and click and drag down and right to mark a rectangular region.
Back in the main window, click Toolkits / Tasks / Capture / White balance. You should see something like:
You can drag and resize your region to change the neutral point. Use the colour picker to set what white means. You can make other whites with (for example) Colour / New / Colour from CCT and link them together.
Click Colour / New / Colour from CCT to make a colour picker from CCT (correlated colour temperature) -- the temperature in Kelvin of that white.
Set it to something interesting, like 4800 for warm white.
Click on the formula for A5.white to edit it, and enter the cell of your CCT widget (A7 in this case).
Now you can drag the region to adjust the pixels to set the neutral from, and drag the CCT slider to set the temperature.
It can be annoying to find things in the toolkit menu. There's a thing for searching toolkits: in the main window, click View / Toolkit browser. You can enter something like "white" and it'll show related toolkit entries.
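If you'd rather script that white-balance step than use the GUI, here's a minimal pyvips sketch of the same idea (the patch coordinates are made up; use the region you want to be neutral):

import pyvips

image = pyvips.Image.new_from_file("before.jpg")

# hypothetical neutral patch marked by the user: left, top, width, height
patch = image.crop(400, 300, 50, 50)

# per-band average of the patch
means = [band.avg() for band in patch.bandsplit()]

# scale each band so the patch average lands on mid-grey (128)
image = image * [128.0 / m for m in means]

image.write_to_file("balanced.jpg")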
Here's another answer, but using pyvips and responding to the previous comments. I didn't want to delete the first answer as it still seemed useful.
This version finds the image histogram, searches for thresholds which will select 0.5% and 99.5% of pixels in each image band, then rescales the image so that those pixel values become 0 and 255.
import sys
import pyvips
# trim off this percentage of pixels from the top and bottom
trim_percent = 0.5
def percent(hist, percentage):
    """From a histogram, find the threshold above which lie
    percentage of pixels."""
    # normalised cumulative histogram
    norm = hist.hist_cum().hist_norm()
    # column and row profile over percentage
    c, r = (norm > norm.width * percentage / 100).profile()
    return r.avg()
image = pyvips.Image.new_from_file(sys.argv[1])
# photographic negative
image = image.invert()
# find image histogram, split to set of separate bands
bands = image.hist_find().bandsplit()
# for each band, the low and high thresholds
low = [percent(band, trim_percent) for band in bands]
high = [percent(band, 100 - trim_percent) for band in bands]
# rescale image
scale = [255.0 / (h - l) for h, l in zip(high, low)]
image = (image - low) * scale
image.write_to_file(sys.argv[2])
It seems to give roughly similar results to the PS button. If I run:
$ ./autolevel.py ~/pics/before.jpg x.jpg
I see:
In the meantime I've found the Simplest Color Balance Algorithm, which describes exactly the problem with color casts, and there you can also find C source code.
It is exactly the same solution as John describes in his second answer, but as a small piece of C code.
I'm now trying to use it as C/C++ addon with N-API under node.js.
I'm stuck trying to figure out how to perform a color transfer from a source image to another image.
The filter that I'd like to create takes two images (ImageA and ImageB) of exactly the same size. It produces an output image in which every pixel of a given color in ImageA is replaced by the color at the same pixel position in ImageB.
Check the image. I needed to change the bluish pixels into green and purple pixels, using ImageA as the source and ImageB as a sort of color mask. As you can see, only the bluish areas that overlap the green and purple areas have been changed (consider the gray color transparent...)
My questions are:
1) Is this something doable?
2) Should I use a general kernel? From my understanding a color kernel should work too, but I'm not sure I can pass two images to a color kernel.
Could you provide a kernel code example?
A very simple, non-optimized pseudocode to execute on each pixel could be something like:
color func (source imageA, source imageB, color colorToChange) {
    if imageA.currentPixel.color == colorToChange {
        if imageB.getPixel(imageA.currentPixel).color not transparent {
            return imageB.getPixel(imageA.currentPixel).color
        }
    }
    return imageA.currentPixel.color
}
And this is the current filter that I'm using (I'm getting strange results with it, though):
kernel vec4 coreImageKernel(sampler image, sampler msk, __color color)
{
    vec4 a = sample(image, samplerCoord(image));
    if (length(a - color) == 0.0) {
        return sample(msk, samplerCoord(msk));
    } else {
        return a;
    }
}
It's possible with a CIKernel, but I don't believe with a simple CIColorKernel, since you need to work with two CIImages.
It's still pretty simple code. You need three inputs: in your example, ImageA, ImageB, and the ImageB pixel color to ignore. Since Core Image works pixel by pixel, you just need to work with the current pixel of both images.
The basic code is this:
kernel vec4 createResult(sampler imageA, sampler imageB, vec4 backgroundColor) {
    vec4 pixelA = sample(imageA, samplerCoord(imageA));
    vec4 pixelB = sample(imageB, samplerCoord(imageB));
    if (pixelA == backgroundColor) {
        return pixelA;
    } else if (pixelB == backgroundColor) {
        return pixelA;
    } else {
        return pixelB;
    }
}
This is untested kernel code, so it may have a syntax error. But here's the logic:
pass in a pixel from both images along with the background color
if pixelA is the background color, it's not a "dot" so do not change it
if pixelB is the background color, it's not a dot you are concerned about, so do not change it
if pixelB isn't the background color, output it
Note that bullet #4 can only be reached if neither pixel is the background color.
One final note, about Apple's decision to deprecate OpenGL: I spent a week after WWDC '18 working on converting my kernel code to Metal 2 and wasn't (yet) successful. Color kernels? Easy. But something about warp and general kernels, related to getting surrounding pixels, is still eluding me. I think it's related to how I'm coding samplerTransform, but I haven't had the time to work through it.
You should be good to use this as a "Metal-based" kernel since I did duplicate a simple pass-through as a CIKernel. Just be aware!
I have a couple of questions, which tie back to a simple need: I want to use the quality histogram as a colorbar in my publication. To export it along with labels, I tried just taking a snapshot with the appropriate tool, but with an alpha or solid white background the text and colorbar are not visible, and with the solid black or MeshLab background the text is white, so neither can be used directly in a publication.
My questions are as follows:
I know how to change the text color in the MeshLab window. Is there a similar function to change the font size?
As a more demanding question, is there a way I can import the quality map file into MATLAB or some other software and plot a custom colorbar? I will append my .qmap file here, but it seems the color field is empty, and I cannot reproduce the colors without it (see the parsing sketch after the file).
%%%%%QMAP FILE TO FOLLOW%%%%%
// COLOR BAND FILE STRUCTURE - first row: RED CHANNEL DATA - second row GREEN CHANNEL DATA - third row: BLUE CHANNEL DATA
// CHANNEL DATA STRUCTURE - the channel structure is grouped in many triples. The items of each triple represent respectively: X VALUE, Y_LOWER VALUE, Y_UPPER VALUE of each node-key of the transfer function
0;0.5;0.125;1;0.375;1;0.625;0;0.875;0;1;0;
0;0;0.125;0;0.375;1;0.625;1;0.875;0;1;0;
0;0;0.125;0;0.375;0;0.625;1;0.875;1;1;0.5;
//THE FOLLOWING 4 VALUES REPRESENT EQUALIZER SETTINGS - the first and the third values represent respectively the minimum and the maximum quality values used in histogram, the second one represent the position (in percentage) of the middle quality, and the last one represent the level of brightness as a floating point number (0 copletely dark, 1 original brightness, 2 completely white)
-0.001;0.714286;0.0004;1;
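Regarding the second question, the band rows can be parsed directly; here is a minimal Python sketch that rebuilds the colormap with matplotlib. Note one assumption: although the header above describes X, Y_LOWER, Y_UPPER triples, the twelve values per row in this file line up as six X;Y pairs (the lower and upper values apparently coincide), which is what the sketch assumes; change the stride if your file really stores triples.

import numpy as np
import matplotlib.pyplot as plt

# the three channel rows (R, G, B) copied from the .qmap file above
rows = [
    "0;0.5;0.125;1;0.375;1;0.625;0;0.875;0;1;0;",
    "0;0;0.125;0;0.375;1;0.625;1;0.875;0;1;0;",
    "0;0;0.125;0;0.375;0;0.625;1;0.875;1;1;0.5;",
]

def parse_channel(row):
    vals = [float(v) for v in row.strip(";\n").split(";")]
    # assumption: X;Y pairs; for X;Y_LOWER;Y_UPPER triples use stride 3
    return vals[0::2], vals[1::2]

# sample each channel's transfer function with linear interpolation
x = np.linspace(0.0, 1.0, 256)
rgb = np.stack([np.interp(x, *parse_channel(r)) for r in rows], axis=1)

# draw the colormap as a horizontal colorbar strip
plt.imshow(rgb[np.newaxis, :, :], aspect="auto")
plt.yticks([])
plt.show()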
Let's say I have this input image, with any number of boxes. I want to segment out these boxes so I can eventually extract them.
input image:
The background could be anything that is continuous, like a painted wall, a wooden table, or carpet.
My idea was that the gradient would be roughly constant throughout the background, so I could turn the regions where the gradient is about the same into zeros in the image.
Through edge detection, I would dilate and fill the regions where edges are detected. Essentially, my goal is to make a blob of the areas where the boxes are. Having the blobs, I would know the exact location of the boxes and thus be able to crop them out of the input image.
So in this case, I should be able to have four blobs, and then I would be able to crop out four images from the input image.
This is how far I got:
segmented image:
query = imread('AllFour.jpg');
gray = rgb2gray(query);
[~, threshold] = edge(gray, 'sobel');
weightedFactor = 1.5;
BWs = edge(gray,'roberts');
%figure, imshow(BWs), title('binary gradient mask');
se90 = strel('disk', 30);
se0 = strel('square', 3);
BWsdil = imdilate(BWs, [se90]);
%figure, imshow(BWsdil), title('dilated gradient mask');
BWdfill = imfill(BWsdil, 'holes');
figure, imshow(BWdfill);
title('binary image with filled holes');
What a very interesting problem! Here's my solution in an attempt to solve this problem for you. This is assuming that the background has the same colour distribution throughout. First, transform your image from RGB to the HSV colour space with rgb2hsv. The HSV colour space is an ideal transform for analyzing colours. After this, I would look at the saturation and value planes. Saturation is concerned with how "pure" the colour is, while value is the intensity or brightness of the colour itself. If you take a look at the saturation and value planes for the image, this is what is shown:
im = imread('http://i.stack.imgur.com/1SGVm.jpg');
out = rgb2hsv(im);
figure;
subplot(2,1,1);
imshow(out(:,:,2));
subplot(2,1,2);
imshow(out(:,:,3));
This is what I get:
By taking a look at some locations in the gray background, it looks like the majority of the saturation values are less than 0.2, while the elements in the value plane are greater than 0.3. As such, we want to find the opposite of those pixels to get our objects, so we find those pixels whose saturation is greater than 0.2 or whose value is less than 0.3:
seg = out(:,:,2) > 0.2 | out(:,:,3) < 0.3;
This is what we get:
Almost there! There are some spurious single pixels, so I'm going to perform an opening with imopen with a line structuring element.
After this, I'll perform a dilation with imdilate to close any gaps, then use imfill with the 'holes' option to fill in the gaps, then use erosion with imerode to shrink the shapes back to their original form. As such:
se = strel('line', 3, 90);
pre = imopen(seg, se);
se = strel('square', 20);
pre2 = imdilate(pre, se);
pre3 = imfill(pre2, 'holes');
final = imerode(pre3, se);
figure;
imshow(final);
final contains the segmented image with the 4 candy boxes. This is what I get:
Try resizing the image. When you make it smaller, it is easier to join the edges. I tried what's shown below. You might have to tune it depending on the nature of the background.
close all;
clear all;
im = imread('1SGVm.jpg');
small = imresize(im, .25); % resize
grad = (double(imdilate(small, ones(3))) - double(small)); % extract edges
gradSum = sum(grad, 3);
bw = edge(gradSum, 'Canny');
joined = imdilate(bw, ones(3)); % join edges
filled = imfill(joined, 'holes');
filled = imerode(filled, ones(3));
imshow(label2rgb(bwlabel(filled))) % label the regions and show
If you have a recent version of MATLAB, try the Color Thresholder app in the image processing toolbox. It lets you interactively play with different color spaces, to see which one can give you the best segmentation.
If your candy box covers are fixed, or you know all the covers that can appear in the scene, then template matching is best for this, as it is independent of the background in the image.
http://docs.opencv.org/doc/tutorials/imgproc/histograms/template_matching/template_matching.html
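For reference, here is a minimal OpenCV template-matching sketch in Python (the file names and the 0.8 score threshold are placeholders):

import cv2
import numpy as np

# placeholders: the full scene and one box-cover template
scene = cv2.imread("AllFour.jpg")
template = cv2.imread("cover.jpg")
h, w = template.shape[:2]

# normalized cross-correlation; high scores mark likely matches
result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)

# keep every location scoring above the threshold and draw its box
for y, x in zip(*np.where(result >= 0.8)):
    cv2.rectangle(scene, (int(x), int(y)), (int(x) + w, int(y) + h), (0, 255, 0), 2)

cv2.imwrite("matches.jpg", scene)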
I just want to know how to set a particular pixel's colour to red.
Suppose x = 37, y = 54 and I want to change this pixel's colour to red. I have no clue how to do it.
I have already obtained the points around a particular object in an array of pixels using the marching squares algorithm.
You cannot change the pixels of an existing CGImage. You have to create a new CGImage with the pixel changed. These are the steps:
Create a CGBitmapContext with CGBitmapContextCreate.
Draw the existing CGImage into it using CGContextDrawImage.
Draw the pixel using CGContextSetFillColorWithColor and CGContextFillRect.
Create a new CGImage using CGBitmapContextCreateImage.
Instead of using CGContextSetFillColorWithColor and CGContextFillRect, you could tweak the bitmap data directly after retrieving a pointer to it with CGBitmapContextGetData. That would be faster if you're going to do it a lot.
Also, if you're going to do it a lot, you will want to create the bitmap context and draw the original image into it just once, and keep the bitmap context around for diddling. But creating the new CGImage from the bitmap context may be a bottleneck.
Your question is quite vague, but here's a general answer:
Pixel colours are usually represented with 3 or 4 bytes:
Red - Green - Blue - ( Alpha )
There should be a function available in the SDK you are using that enables you to set these values for a pixel. You would set red to 255 and the others to 0 if you want a pure red colour.
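For example, with Python's Pillow library (just one SDK's way of doing it), setting the pixel from the question to pure red looks like this:

from PIL import Image

img = Image.open("input.png").convert("RGB")

# set the pixel at x = 37, y = 54 to pure red: R = 255, G = 0, B = 0
img.putpixel((37, 54), (255, 0, 0))

img.save("output.png")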
If you are working with CCSprite, just change the color using the color property of the sprite:
mySprite.color=ccc3(123,234,12); //use whatever color values for red, green, blue you want
The maximum values for red, green, and blue in ccc3 are 255; when maxed out, the color is the natural color of the sprite. You cannot go brighter, but changing these values will shift to other colors, or darken the image if all are lowered by the same amount.
for pure red, use ccc3(255,0,0)