Related
The XYZ color space encompasses all possible colors, not just those which can be generated by a particular device like a monitor. Not all XYZ triplets represent a color that is physically possible. Is there a way, given an XYZ triplet, to determine if it represents a real color?
I wanted to generate a CIE 1931 chromaticity diagram (seen below) for myself, but wasn't sure how to go about it. It's easy, for example, to take all combinations of sRGB triplets, transform them into the xy coordinates of the chromaticity diagram, and plot them. You cannot use the same approach in the XYZ color space, though, since not all combinations are valid colors. So far the best I have come up with is a stochastic approach: generate a random spectral distribution by summing a random number of random Gaussians, then convert it to XYZ using the standard observer functions.
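For reference, here is a minimal sketch of that stochastic idea. It leans on the xFit_1931/yFit_1931/zFit_1931 approximations defined further down; the helper names and the ranges for the random parameters are just illustrative choices:
function randomSpectrum() {
  // Sum a random number (1-4) of random Gaussians over the visible range
  const gaussians = Array.from({ length: 1 + Math.floor(Math.random() * 4) }, () => ({
    mean: 380 + Math.random() * 400, // center wavelength in nm
    sigma: 5 + Math.random() * 50,   // width in nm
    amp: Math.random()               // height
  }))
  return wavelength => gaussians.reduce(
    (sum, g) => sum + g.amp * Math.exp(-0.5 * ((wavelength - g.mean) / g.sigma) ** 2), 0)
}
function XYZfromSpectrum(spectrum) {
  // Riemann sum of the spectrum against the standard observer approximations
  let X = 0, Y = 0, Z = 0
  for (let nm = 380; nm <= 780; ++nm) {
    const s = spectrum(nm)
    X += s * xFit_1931(nm)
    Y += s * yFit_1931(nm)
    Z += s * zFit_1931(nm)
  }
  return [X, Y, Z]
}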
Having thought about it a little more, I felt the obvious solution is to generate a list of xy points around the edge of the spectral locus, corresponding to pure monochromatic colors. It seems to me that this can be done by directly inputting the visible wavelengths (~380-780 nm) into the CIE XYZ standard observer color matching functions. Treating these points as a convex polygon, you could determine whether a given point lies within the spectral locus using one algorithm or another. In my case, since what I really wanted was simply to generate the chromaticity diagram, I input these points into a graphics library's polygon drawing routine and then transform each pixel of the polygon into sRGB.
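To answer the "is this XYZ triplet a real color" part, a minimal convex point-in-polygon test over those samples might look like this (a sketch assuming locus_points holds [x, y] chromaticity pairs ordered along the locus, as built in the code below; project an XYZ triplet to xy first via x = X/(X+Y+Z), y = Y/(X+Y+Z), and the wrap-around edge plays the role of the line of purples):
function isInsideLocus(px, py, locus_points) {
  let sign = 0
  for (let i = 0; i < locus_points.length; ++i) {
    const [x1, y1] = locus_points[i]
    const [x2, y2] = locus_points[(i + 1) % locus_points.length] // wrap-around edge = line of purples
    // 2D cross product: which side of this edge is the point on?
    const cross = (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)
    if (cross !== 0) {
      if (sign === 0) sign = Math.sign(cross)
      else if (sign !== Math.sign(cross)) return false // sides differ: outside
    }
  }
  return true // same side of every edge: inside the convex hull
}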
I believe this solution is similar to the one used by the library that Kel linked in a comment. I'm not entirely sure, as I am not familiar with Python.
function RGBfromXYZ(X, Y, Z) {
const R = 3.2404542 * X - 1.5371385 * Y - 0.4985314 * Z
const G = -0.969266 * X + 1.8760108 * Y + 0.0415560 * Z
const B = 0.0556434 * X - 0.2040259 * Y + 1.0572252 * Z
return [R, G, B]
}
function XYZfromYxy(Y, x, y) {
const X = Y / y * x
const Z = Y / y * (1 - x - y)
return [X, Y, Z]
}
function srgb_from_linear(x) {
if (x <= 0.0031308) {
return x * 12.92
} else {
return 1.055 * Math.pow(x, 1/2.4) - 0.055
}
}
// Analytic Approximations to the CIE XYZ Color Matching Functions
// from Sloan http://jcgt.org/published/0002/02/01/paper.pdf
function xFit_1931(x) {
const t1 = (x - 442) * (x < 442 ? 0.0624 : 0.0374)
const t2 = (x - 599.8) * (x < 599.8 ? 0.0264 : 0.0323)
const t3 = (x - 501.1) * (x < 501.1 ? 0.0490 : 0.0382)
return 0.362 * Math.exp(-0.5 * t1 * t1) + 1.056 * Math.exp(-0.5 * t2 * t2) - 0.065 * Math.exp(-0.5 * t3 * t3)
}
function yFit_1931(x) {
const t1 = (x - 568.8) * (x < 568.8 ? 0.0213 : 0.0247)
const t2 = (x - 530.9) * (x < 530.9 ? 0.0613 : 0.0322)
return 0.821 * Math.exp(-0.5 * t1 * t1) + 0.286 * Math.exp(-0.5 * t2 * t2)
}
function zFit_1931(x) {
const t1 = (x - 437) * (x < 437 ? 0.0845 : 0.0278)
const t2 = (x - 459) * (x < 459 ? 0.0385 : 0.0725)
return 1.217 * Math.exp(-0.5 * t1 * t1) + 0.681 * Math.exp(-0.5 * t2 * t2)
}
const canvas = document.createElement("canvas")
document.body.append(canvas)
canvas.width = canvas.height = 512
const ctx = canvas.getContext("2d")
const locus_points = []
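// Sample the spectral locus from 440 nm to 650 nm (presumably to avoid the
// near-zero tails of the analytic fits, where the chromaticity becomes unstable).
// Note: canvas y grows downward, so the diagram comes out vertically flipped
// relative to the usual presentation; the pixel loop below uses the same
// convention, so it is self-consistent.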
for (let i = 440; i < 650; ++i) {
const [X, Y, Z] = [xFit_1931(i), yFit_1931(i), zFit_1931(i)]
const x = (X / (X + Y + Z)) * canvas.width
const y = (Y / (X + Y + Z)) * canvas.height
locus_points.push([x, y])
}
ctx.beginPath()
ctx.moveTo(...locus_points[0])
locus_points.slice(1).forEach(point => ctx.lineTo(...point))
ctx.closePath()
ctx.fill()
const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height)
for (let y = 0; y < canvas.height; ++y) {
for (let x = 0; x < canvas.width; ++x) {
const alpha = imageData.data[(y * canvas.width + x) * 4 + 3]
if (alpha > 0) {
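// This pixel is inside the filled locus polygon: recover its (x, y)
// chromaticity from its canvas position and colorize it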
const [X, Y, Z] = XYZfromYxy(1, x / canvas.width, y / canvas.height)
const [R, G, B] = RGBfromXYZ(X, Y, Z)
const norm = Math.sqrt(R**2 + G**2 + B**2) // normalize brightness so out-of-gamut chromaticities still map to a displayable color
const r = Math.round(srgb_from_linear(R / norm) * 255)
const g = Math.round(srgb_from_linear(G / norm) * 255)
const b = Math.round(srgb_from_linear(B / norm) * 255)
imageData.data[(y * canvas.width + x) * 4 + 0] = r
imageData.data[(y * canvas.width + x) * 4 + 1] = g
imageData.data[(y * canvas.width + x) * 4 + 2] = b
}
}
}
ctx.putImageData(imageData, 0, 0)
I'm implementing the Sobel filter according to the following pseudocode, taken from Wikipedia:
function sobel(A : as two dimensional image array)
Gx=[-1 0 1; -2 0 2; -1 0 1]
Gy=[-1 -2 -1; 0 0 0; 1 2 1]
rows = size(A,1)
columns = size(A,2)
mag=zeros(A)
for i=1:rows-2
for j=1:columns-2
S1=sum(sum(Gx.*A(i:i+2,j:j+2)))
S2=sum(sum(Gy.*A(i:i+2,j:j+2)))
mag(i+1,j+1)=sqrt(S1.^2+S2.^2)
end for
end for
threshold = 70 %varies for application [0 255]
output_image = max(mag,threshold)
output_image(output_image==round(threshold))=0;
return output_image
end function
However, upon applying this algorithm I'm getting many output_image values above 255, which makes sense considering how Gx and Gy are defined. How can I modify this algorithm so that the values don't go above 255 and, ultimately, so that the results look more like this?
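(To quantify the overflow: with 8-bit input, the positive weights of each kernel sum to 4, so |S1| and |S2| can each reach 4 * 255 = 1020, and sqrt(1020^2 + 1020^2) ≈ 1443 is an upper bound on the magnitude. Scaling by 255/1443, or normalizing by the actual maximum over the image, would bring everything back into [0, 255].)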
--- Edit ---
There was an error in my filter implementation, and I think that's why the values were above 255. After fixing the error, the values range between 0 and 16. Since all values are now below 70, applying a threshold of 70 sends everything to 0. So I set a lower threshold, 5, and multiplied the remaining values by 10 (to enhance the edges, since they lie in the 5-16 range) and got the following result:
I also tried the normalization method mentioned in the comments, but got a similarly noisy image.
--- Edit 2 ---
Since the actual code was requested, I'm posting the code, which is written in Halide.
int main(int argc, char **argv) {
Var x, y, k, c;
Buffer<uint8_t> left_buffer = load_image("images/stereo/bike.jpg");
Expr clamped_x = clamp(x, 0, left_buffer.width() - 1);
Expr clamped_y = clamp(y, 0, left_buffer.height() - 1);
Func left_original("left_original");
left_original(x, y) = left_buffer(clamped_x, clamped_y);
left_original.compute_root();
// 3x3 sobel filter
Buffer<uint8_t> sobel_1(3);
sobel_1(0) = -1;
sobel_1(1) = 0;
sobel_1(2) = 1;
Buffer<uint8_t> sobel_2(3);
sobel_2(0) = 1;
sobel_2(1) = 2;
sobel_2(2) = 1;
RDom conv_x(-1, 2);
RDom conv_y(-1, 2);
Func output_x_inter("output_x_inter");
output_x_inter(x, y) = sum(left_original(x - conv_x, y) * sobel_1(conv_x + 1));
output_x_inter.compute_root();
Func output_x("output_x");
output_x(x, y) = sum(output_x_inter(x, y - conv_y) * sobel_2(conv_y + 1));
output_x.compute_root();
Func output_y("output_y");
output_y(x, y) = sum(conv_y, sum(conv_x, left_original(x - conv_x, y - conv_y) * sobel_2(conv_x + 1)) * sobel_1(conv_y + 1));
output_y.compute_root();
Func output("output");
output(x, y) = sqrt(output_x(x, y) * output_x(x, y) + output_y(x, y) * output_y(x, y));
output.compute_root();
output.trace_stores();
RDom img(0, left_buffer.width(), 0, left_buffer.height());
Func max("max");
max(k) = f32(0);
max(0) = maximum(output(img.x, img.y));
max.compute_root();
Func min("min");
min(k) = f32(0);
min(0) = minimum(output(img.x, img.y));
min.compute_root();
Func output_u8("output_u8");
// The following line sends all the values of output <= 5 to zero, and multiplies the resulting values by 10 to enhance the intensity of the edges.
output_u8(x, y) = u8(select(output(x, y) <= 5, 0, output(x, y))*10);
output_u8.compute_root();
output_u8.trace_stores();
Buffer<uint8_t> output_buff = output_u8.realize(left_buffer.width(), left_buffer.height());
save_image(output_buff, "images/stereo/sobel/out.png");
}
--- Edit 3 ---
As one answer suggested, I changed all types to float except the last one, which must be an unsigned 8-bit type. Here's the code, and the result I'm getting:
int main(int argc, char **argv) {
Var x, y, k, c;
Buffer<uint8_t> left_buffer = load_image("images/stereo/bike.jpg");
Expr clamped_x = clamp(x, 0, left_buffer.width() - 1);
Expr clamped_y = clamp(y, 0, left_buffer.height() - 1);
Func left_original("left_original");
left_original(x, y) = left_buffer(clamped_x, clamped_y);
left_original.compute_root();
// 3x3 sobel filter
Buffer<float_t> sobel_1(3);
sobel_1(0) = -1;
sobel_1(1) = 0;
sobel_1(2) = 1;
Buffer<float_t> sobel_2(3);
sobel_2(0) = 1;
sobel_2(1) = 2;
sobel_2(2) = 1;
RDom conv_x(-1, 2);
RDom conv_y(-1, 2);
Func output_x_inter("output_x_inter");
output_x_inter(x, y) = f32(sum(left_original(x - conv_x, y) * sobel_1(conv_x + 1)));
output_x_inter.compute_root();
Func output_x("output_x");
output_x(x, y) = f32(sum(output_x_inter(x, y - conv_y) * sobel_2(conv_y + 1)));
output_x.compute_root();
RDom img(0, left_buffer.width(), 0, left_buffer.height());
Func output_y("output_y");
output_y(x, y) = f32(sum(conv_y, sum(conv_x, left_original(x - conv_x, y - conv_y) * sobel_2(conv_x + 1)) * sobel_1(conv_y + 1)));
output_y.compute_root();
Func output("output");
output(x, y) = sqrt(output_x(x, y) * output_x(x, y) + output_y(x, y) * output_y(x, y));
output.compute_root();
Func max("max");
max(k) = f32(0);
max(0) = maximum(output(img.x, img.y));
max.compute_root();
Func min("min");
min(k) = f32(0);
min(0) = minimum(output(img.x, img.y));
min.compute_root();
// output_inter for scaling
Func output_inter("output_inter");
output_inter(x, y) = f32((output(x, y) - min(0)) * 255 / (max(0) - min(0)));
output_inter.compute_root();
Func output_u8("output_u8");
output_u8(x, y) = u8(select(output_inter(x, y) <= 70, 0, output_inter(x, y)));
output_u8.compute_root();
output_u8.trace_stores();
Buffer<uint8_t> output_buff = output_u8.realize(left_buffer.width(), left_buffer.height());
save_image(output_buff, "images/stereo/sobel/out.png");
}
--- Edit 4 ---
As @CrisLuengo suggested, I simplified my code and output the result of the following:
output(x, y) = u8(min(sqrt(output_x(x, y) * output_x(x, y) + output_y(x, y) * output_y(x, y)), 255));
Since many values are way above 255, they get clamped to 255, and thus we get a "washed out" image:
I don't know the Halide syntax; I've just learned that it exists. But I can point out one clear problem:
Buffer<uint8_t> sobel_1(3);
sobel_1(0) = -1;
You are assigning -1 to a uint8 type. That doesn't work as intended. Make the kernel a float, and do all computations as floats, then scale the result and store it in your uint8 output image.
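As a quick illustration (a JavaScript typed array shows the same modulo-256 wraparound as the C++ conversion; this is just to demonstrate the effect, not Halide code):
const u8 = new Uint8Array(1)
u8[0] = -1
console.log(u8[0]) // prints 255, so a kernel of [-1, 0, 1] silently becomes [255, 0, 1]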
When computing using small integer types, one has to be very careful with overflow and underflow. The Sobel computations could likely be done in the (signed) int16 type (the raw gradients only reach +/-1020 for 8-bit input, well within range), but in my experience there is no advantage in that over using the float type, then scaling (or clamping) and casting the result to the output image's type.
I figured it out finally, but I'm not sure why Halide is behaving this way.
When I do this:
RDom conv_x(-1, 2);
RDom conv_y(-1, 2);
Func output_x_inter("output_x_inter");
output_x_inter(x, y) = f32(sum(left_original(x - conv_x, y) * sobel_1(conv_x + 1)));
Func output_x("output_x");
output_x(x, y) = f32(sum(output_x_inter(x, y - conv_y) * sobel_2(conv_y + 1)));
Things don't work. But when I "unroll" the sum function things work:
Func output_x_inter("output_x_inter");
output_x_inter(x, y) = f32(left_original(x + 1, y) * sobel_1(0) + left_original(x, y) * sobel_1(1) + left_original(x - 1, y) * sobel_1(2));
Func output_x("output_x");
output_x(x, y) = f32(output_x_inter(x, y + 1) * sobel_2(0) + output_x_inter(x, y) * sobel_2(1) + output_x_inter(x, y - 1) * sobel_2(2));
My best guess as to why: RDom conv_x(-1, 2) declares a minimum of -1 and an extent of 2, i.e. the two-element domain {-1, 0}, so the summed versions only ever covered two of the three kernel taps. RDom conv_x(-1, 3) (and likewise for conv_y) should cover all three.
For short labels (1-2 characters) the callout shape is drawn without its pointer, which can be confusing on complex charts. Can this be fixed somehow?
Update1:
Editor: https://stackblitz.com/edit/js-s6upn2
Result: https://js-s6upn2.stackblitz.io
Update2 (fixed workaround):
top and bottom verticalAlign are now handled; the editor has been updated
Highcharts.SVGRenderer.prototype.symbols.callout = function(x, y, w, h, options) {
const arrowLength = 6;
const halfDistance = 6;
const r = Math.min((options && options.r) || 0, w, h);
// const safeDistance = r + halfDistance;
const anchorX = options && options.anchorX;
const anchorY = options && options.anchorY;
const path = [
'M', x + r, y,
'L', x + w - r, y, // top side
'C', x + w, y, x + w, y, x + w, y + r, // top-right corner
'L', x + w, y + h - r, // right side
'C', x + w, y + h, x + w, y + h, x + w - r, y + h, // bottom-right corner
'L', x + r, y + h, // bottom side
'C', x, y + h, x, y + h, x, y + h - r, // bottom-left corner
'L', x, y + r, // left side
'C', x, y, x, y, x + r, y // top-left corner
];
path.splice(
anchorY >= 0 ? 23 : 3,
3,
'L', anchorX + halfDistance > x + w ? x + w : anchorX + halfDistance, anchorY >= 0 ? y + h : y,
anchorX, anchorY >= 0 ? y + h + arrowLength : y - arrowLength,
anchorX - halfDistance >= x ? anchorX - halfDistance : x, anchorY >= 0 ? y + h : y,
x + r, anchorY >= 0 ? y + h : y
);
return path;
};
Update3
Actually, it looks like this implementation would need to handle many more corner cases; for example, it breaks "split"-type tooltips.
So I suggest calling this shape "callout2" and using it for top/bottom data labels only, as sketched below.
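A minimal sketch of that suggestion, assuming the Update2 function above is saved as calloutWithPointer (that name and the chart options are illustrative):
// Register the patched shape under a new name so the stock 'callout'
// (used by tooltips, among other things) keeps its default behavior
Highcharts.SVGRenderer.prototype.symbols.callout2 = calloutWithPointer;
Highcharts.chart('container', {
  series: [{
    data: [29.9, 71.5, 106.4],
    dataLabels: {
      enabled: true,
      shape: 'callout2', // the patched shape, for data labels only
      backgroundColor: 'rgba(0, 0, 0, 0.75)',
      color: '#FFFFFF',
      verticalAlign: 'bottom',
      y: -12
    }
  }]
});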
This problem looks like a bug, so I reported it on the Highcharts GitHub: https://github.com/highcharts/highcharts/issues/10020
As a workaround, you can overwrite the callout shape:
Highcharts.SVGRenderer.prototype.symbols.callout = function(x, y, w, h, options) {
var arrowLength = 6,
halfDistance = 6,
r = Math.min((options && options.r) || 0, w, h),
safeDistance = r + halfDistance,
anchorX = options && options.anchorX,
anchorY = options && options.anchorY,
path;
path = [
'M', x + r, y,
'L', x + w - r, y, // top side
'C', x + w, y, x + w, y, x + w, y + r, // top-right corner
'L', x + w, y + h - r, // right side
'C', x + w, y + h, x + w, y + h, x + w - r, y + h, // bottom-right corner
'L', x + r, y + h, // bottom side
'C', x, y + h, x, y + h, x, y + h - r, // bottom-left corner
'L', x, y + r, // left side
'C', x, y, x, y, x + r, y // top-left corner
];
path.splice(
23,
3,
'L', anchorX + halfDistance, y + h,
anchorX, y + h + arrowLength,
anchorX - halfDistance, y + h,
x + r, y + h
);
return path;
};
Live demo: http://jsfiddle.net/BlackLabel/o0j76Lbt/
A few days ago I started learning RenderScript. I managed to create some simple image processing filters, e.g. grayscale and color change.
Now I'm working on a Canny edge filter, with no success.
Question: why does the ImageView display a black image, and how do I solve it?
I'm using the implementation of the Canny edge filter made by arekolek on GitHub.
Optional: can I compute it faster?
I ended up with all the code written in one method, runEdgeFilter(...), which runs when I click the image on my device, to make sure I'm not messing with the imageView anywhere else. The code I use so far:
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.support.v8.renderscript.*;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.view.View;
import android.widget.ImageView;
public class MainActivity extends AppCompatActivity {
private static final float THRESHOLD_MULT_LOW = 0.66f * 0.00390625f;
private static final float THRESHOLD_MULT_HIGH = 1.33f * 0.00390625f;
private ImageView imageView;
private Bitmap img;
private boolean setThresholds = true;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
imageView = (ImageView) findViewById(R.id.imageView);
img = BitmapFactory.decodeResource(getResources(), R.drawable.test_img_no_dpi2);
imageView.setImageBitmap(img);
}
public void imageClicked(View view) {
runEdgeFilter(img, this);
}
private void runEdgeFilter(Bitmap image, Context context) {
int width = image.getWidth();
int height = image.getHeight();
RenderScript rs = RenderScript.create(context);
Allocation allocationIn = Allocation.createFromBitmap(rs, image);
Type.Builder tb;
tb = new Type.Builder(rs, Element.F32(rs)).setX(width).setY(height);
Allocation allocationBlurred = Allocation.createTyped(rs, tb.create());
Allocation allocationMagnitude = Allocation.createTyped(rs, tb.create());
tb = new Type.Builder(rs, Element.I32(rs)).setX(width).setY(height);
Allocation allocationDirection = Allocation.createTyped(rs, tb.create());
Allocation allocationEdge = Allocation.createTyped(rs, tb.create());
tb = new Type.Builder(rs, Element.I32(rs)).setX(256);
Allocation allocationHistogram = Allocation.createTyped(rs, tb.create());
tb = new Type.Builder(rs, Element.RGBA_8888(rs)).setX(width).setY(height);
Allocation allocationOut = Allocation.createTyped(rs, tb.create());
ScriptC_edge edgeFilter = new ScriptC_edge(rs);
ScriptIntrinsicHistogram histogram = ScriptIntrinsicHistogram.create(rs, Element.U8(rs));
histogram.setOutput(allocationHistogram);
edgeFilter.invoke_set_histogram(allocationHistogram);
edgeFilter.invoke_set_blur_input(allocationIn);
edgeFilter.invoke_set_compute_gradient_input(allocationBlurred);
edgeFilter.invoke_set_suppress_input(allocationMagnitude, allocationDirection);
edgeFilter.invoke_set_hysteresis_input(allocationEdge);
edgeFilter.invoke_set_thresholds(0.2f, 0.6f);
histogram.forEach_Dot(allocationIn);
int[] histogramOutput = new int[256];
allocationHistogram.copyTo(histogramOutput);
if(setThresholds) {
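// Walk the histogram until half of the pixels have been counted: i is then
// the median intensity. The threshold multipliers above include a factor of
// 0.00390625 = 1/256, which maps the 0-255 bin index into the normalized
// [0, 1] range that the kernels work in.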
int median = width * height / 2;
for (int i = 0; i < 256; ++i) {
median -= histogramOutput[i];
if (median < 1) {
edgeFilter.invoke_set_thresholds(i * THRESHOLD_MULT_LOW, i * THRESHOLD_MULT_HIGH);
break;
}
}
}
edgeFilter.forEach_blur(allocationBlurred);
edgeFilter.forEach_compute_gradient(allocationMagnitude);
edgeFilter.forEach_suppress(allocationEdge);
edgeFilter.forEach_hysteresis(allocationOut);
allocationOut.copyTo(image);
allocationIn.destroy();
allocationMagnitude.destroy();
allocationBlurred.destroy();
allocationDirection.destroy();
allocationEdge.destroy();
allocationHistogram.destroy();
allocationOut.destroy();
histogram.destroy();
edgeFilter.destroy();
rs.destroy();
imageView.setImageBitmap(image);
}
}
The RenderScript file, edge.rs:
#pragma version(1)
#pragma rs java_package_name(com.lukasz.edgeexamplers)
#pragma rs_fp_relaxed
#include "rs_debug.rsh"
static rs_allocation raw, magnitude, blurred, direction, candidates;
static float low, high;
static const uint32_t zero = 0;
void set_blur_input(rs_allocation u8_buf) {
raw = u8_buf;
}
void set_compute_gradient_input(rs_allocation f_buf) {
blurred = f_buf;
}
void set_suppress_input(rs_allocation f_buf, rs_allocation i_buf) {
magnitude = f_buf;
direction = i_buf;
}
void set_hysteresis_input(rs_allocation i_buf) {
candidates = i_buf;
}
void set_thresholds(float l, float h) {
low = l;
high = h;
}
inline static float getElementAt_uchar_to_float(rs_allocation a, uint32_t x,
uint32_t y) {
return rsGetElementAt_uchar(a, x, y) / 255.0f;
}
static rs_allocation histogram;
void set_histogram(rs_allocation h) {
histogram = h;
}
uchar4 __attribute__((kernel)) addhisto(uchar in, uint32_t x, uint32_t y) {
int px = (x - 100) / 2;
if (px > -1 && px < 256) {
int v = log((float) rsGetElementAt_int(histogram, (uint32_t) px)) * 30;
int py = (400 - y);
if (py > -1 && v > py) {
in = 255;
}
if (py == -1) {
in = 255;
}
}
uchar4 out = { in, in, in, 255 };
return out;
}
uchar4 __attribute__((kernel)) copy(uchar in) {
uchar4 out = { in, in, in, 255 };
return out;
}
uchar4 __attribute__((kernel)) blend(uchar4 in, uint32_t x, uint32_t y) {
uchar r = rsGetElementAt_uchar(raw, x, y);
uchar4 out = { r, r, r, 255 };
return max(out, in);
}
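// 5x5 Gaussian blur of the raw input, normalized to [0, 1]; this is the
// classic kernel whose weights sum to 159, hence the division at the end.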
float __attribute__((kernel)) blur(uint32_t x, uint32_t y) {
float pixel = 0;
pixel += 2 * getElementAt_uchar_to_float(raw, x - 2, y - 2);
pixel += 4 * getElementAt_uchar_to_float(raw, x - 1, y - 2);
pixel += 5 * getElementAt_uchar_to_float(raw, x, y - 2);
pixel += 4 * getElementAt_uchar_to_float(raw, x + 1, y - 2);
pixel += 2 * getElementAt_uchar_to_float(raw, x + 2, y - 2);
pixel += 4 * getElementAt_uchar_to_float(raw, x - 2, y - 1);
pixel += 9 * getElementAt_uchar_to_float(raw, x - 1, y - 1);
pixel += 12 * getElementAt_uchar_to_float(raw, x, y - 1);
pixel += 9 * getElementAt_uchar_to_float(raw, x + 1, y - 1);
pixel += 4 * getElementAt_uchar_to_float(raw, x + 2, y - 1);
pixel += 5 * getElementAt_uchar_to_float(raw, x - 2, y);
pixel += 12 * getElementAt_uchar_to_float(raw, x - 1, y);
pixel += 15 * getElementAt_uchar_to_float(raw, x, y);
pixel += 12 * getElementAt_uchar_to_float(raw, x + 1, y);
pixel += 5 * getElementAt_uchar_to_float(raw, x + 2, y);
pixel += 4 * getElementAt_uchar_to_float(raw, x - 2, y + 1);
pixel += 9 * getElementAt_uchar_to_float(raw, x - 1, y + 1);
pixel += 12 * getElementAt_uchar_to_float(raw, x, y + 1);
pixel += 9 * getElementAt_uchar_to_float(raw, x + 1, y + 1);
pixel += 4 * getElementAt_uchar_to_float(raw, x + 2, y + 1);
pixel += 2 * getElementAt_uchar_to_float(raw, x - 2, y + 2);
pixel += 4 * getElementAt_uchar_to_float(raw, x - 1, y + 2);
pixel += 5 * getElementAt_uchar_to_float(raw, x, y + 2);
pixel += 4 * getElementAt_uchar_to_float(raw, x + 1, y + 2);
pixel += 2 * getElementAt_uchar_to_float(raw, x + 2, y + 2);
pixel /= 159;
return pixel;
}
float __attribute__((kernel)) compute_gradient(uint32_t x, uint32_t y) {
float gx = 0;
gx -= rsGetElementAt_float(blurred, x - 1, y - 1);
gx -= rsGetElementAt_float(blurred, x - 1, y) * 2;
gx -= rsGetElementAt_float(blurred, x - 1, y + 1);
gx += rsGetElementAt_float(blurred, x + 1, y - 1);
gx += rsGetElementAt_float(blurred, x + 1, y) * 2;
gx += rsGetElementAt_float(blurred, x + 1, y + 1);
float gy = 0;
gy += rsGetElementAt_float(blurred, x - 1, y - 1);
gy += rsGetElementAt_float(blurred, x, y - 1) * 2;
gy += rsGetElementAt_float(blurred, x + 1, y - 1);
gy -= rsGetElementAt_float(blurred, x - 1, y + 1);
gy -= rsGetElementAt_float(blurred, x, y + 1) * 2;
gy -= rsGetElementAt_float(blurred, x + 1, y + 1);
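// Quantize the gradient direction into 4 bins (0 = horizontal, 1 = NW-SE,
// 2 = vertical, 3 = NE-SW) for the non-maximum suppression step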
int d = ((int) round(atan2pi(gy, gx) * 4.0f) + 4) % 4;
rsSetElementAt_int(direction, d, x, y);
return hypot(gx, gy);
}
int __attribute__((kernel)) suppress(uint32_t x, uint32_t y) {
int d = rsGetElementAt_int(direction, x, y);
float g = rsGetElementAt_float(magnitude, x, y);
if (d == 0) {
// horizontal, check left and right
float a = rsGetElementAt_float(magnitude, x - 1, y);
float b = rsGetElementAt_float(magnitude, x + 1, y);
return a < g && b < g ? 1 : 0;
} else if (d == 2) {
// vertical, check above and below
float a = rsGetElementAt_float(magnitude, x, y - 1);
float b = rsGetElementAt_float(magnitude, x, y + 1);
return a < g && b < g ? 1 : 0;
} else if (d == 1) {
// NW-SE
float a = rsGetElementAt_float(magnitude, x - 1, y - 1);
float b = rsGetElementAt_float(magnitude, x + 1, y + 1);
return a < g && b < g ? 1 : 0;
} else {
// NE-SW
float a = rsGetElementAt_float(magnitude, x + 1, y - 1);
float b = rsGetElementAt_float(magnitude, x - 1, y + 1);
return a < g && b < g ? 1 : 0;
}
}
static const int NON_EDGE = 0b000;
static const int LOW_EDGE = 0b001;
static const int MED_EDGE = 0b010;
static const int HIG_EDGE = 0b100;
inline static int getEdgeType(uint32_t x, uint32_t y) {
int e = rsGetElementAt_int(candidates, x, y);
float g = rsGetElementAt_float(magnitude, x, y);
if (e == 1) {
if (g < low)
return LOW_EDGE;
if (g > high)
return HIG_EDGE;
return MED_EDGE;
}
return NON_EDGE;
}
uchar4 __attribute__((kernel)) hysteresis(uint32_t x, uint32_t y) {
uchar4 white = { 255, 255, 255, 255 };
uchar4 red = { 255, 0, 0, 255 };
uchar4 black = { 0, 0, 0, 255 };
int type = getEdgeType(x, y);
if (type) {
if (type & LOW_EDGE) {
return black;
}
if (type & HIG_EDGE) {
//rsDebug("wh : x=", x);
//rsDebug("wh : y=", y);
return white;
}
// it's medium, check nearest neighbours
type = getEdgeType(x - 1, y - 1);
type |= getEdgeType(x, y - 1);
type |= getEdgeType(x + 1, y - 1);
type |= getEdgeType(x - 1, y);
type |= getEdgeType(x + 1, y);
type |= getEdgeType(x - 1, y + 1);
type |= getEdgeType(x, y + 1);
type |= getEdgeType(x + 1, y + 1);
if (type & HIG_EDGE) {
//rsDebug("wh : x=", x);
//rsDebug("wh : y=", y);
return white;
}
if (type & MED_EDGE) {
// check further
type = getEdgeType(x - 2, y - 2);
type |= getEdgeType(x - 1, y - 2);
type |= getEdgeType(x, y - 2);
type |= getEdgeType(x + 1, y - 2);
type |= getEdgeType(x + 2, y - 2);
type |= getEdgeType(x - 2, y - 1);
type |= getEdgeType(x + 2, y - 1);
type |= getEdgeType(x - 2, y);
type |= getEdgeType(x + 2, y);
type |= getEdgeType(x - 2, y + 1);
type |= getEdgeType(x + 2, y + 1);
type |= getEdgeType(x - 2, y + 2);
type |= getEdgeType(x - 1, y + 2);
type |= getEdgeType(x, y + 2);
type |= getEdgeType(x + 1, y + 2);
type |= getEdgeType(x + 2, y + 2);
if (type & HIG_EDGE) {
//rsDebug("wh : x=", x);
//rsDebug("wh : y=", y);
return white;
}
}
}
return black;
}
After some debugging I found that:
uchar4 __attribute__((kernel)) hysteresis(uint32_t x, uint32_t y) {...}
returns white and black pixels, so I think the RenderScript itself works properly.
The output is the same type as in my previous RenderScript filters (uchar4), which I assign to a Bitmap with success.
I have no idea what I've done wrong.
Also, my logcat prints:
V/RenderScript_jni: RS compat mode
V/RenderScript_jni: Unable to load libRSSupportIO.so, USAGE_IO not supported
V/RenderScript_jni: Unable to load BLAS lib, ONLY BNNM will be supported: java.lang.UnsatisfiedLinkError: Couldn't load blasV8 from loader dalvik.system.PathClassLoader[dexPath=/data/app/com.lukasz.edgeexamplers-20.apk,libraryPath=/data/app-lib/com.lukasz.edgeexamplers-20]: findLibrary returned null
E/RenderScript: Couldn't load libRSSupportIO.so
in every program that uses RenderScript, but the other programs work even with these warnings.
Update #1
As @Stephen Hines mentioned, there was an issue with reading out of bounds. I think I fixed it for now (without messing with the RenderScript) by changing these lines:
edgeFilter.forEach_blur(allocationBlurred);
edgeFilter.forEach_compute_gradient(allocationMagnitude);
edgeFilter.forEach_suppress(allocationEdge);
edgeFilter.forEach_hysteresis(allocationOut);
into:
Script.LaunchOptions sLaunchOpt = new Script.LaunchOptions();
sLaunchOpt.setX(2, width - 3);
sLaunchOpt.setY(2, height - 3);
edgeFilter.forEach_blur(allocationBlurred, sLaunchOpt);
edgeFilter.forEach_compute_gradient(allocationMagnitude, sLaunchOpt);
edgeFilter.forEach_suppress(allocationEdge, sLaunchOpt);
edgeFilter.forEach_hysteresis(allocationOut, sLaunchOpt);
But my problem is still not solved. The output is black, as before.
I am trying to apply a sepia effect to an image on BlackBerry.
I have tried it, but I don't get a 100% sepia effect.
This is the code that I have tried for the sepia effect.
I have used the getARGB() and setARGB() methods of the Bitmap class.
public Bitmap changetoSepiaEffect(Bitmap bitmap) {
int sepiaIntensity=30;//value lies between 0-255. 30 works well
// Play around with this. 20 works well and was recommended
// by another developer. 0 produces black/white image
int sepiaDepth = 20;
int w = bitmap.getWidth();
int h = bitmap.getHeight();
// WritableRaster raster = img.getRaster();
// We need 3 integers (for R,G,B color values) per pixel.
int[] pixels = new int[w*h*3];
// raster.getPixels(0, 0, w, h, pixels);
bitmap.getARGB(pixels, 0, w, x, y, w, h);
// Process 3 ints at a time for each pixel.
// Each pixel has 3 RGB colors in array
for (int i=0;i<pixels.length; i+=3) {
int r = pixels[i];
int g = pixels[i+1];
int b = pixels[i+2];
int gry = (r + g + b) / 3;
r = g = b = gry;
r = r + (sepiaDepth * 2);
g = g + sepiaDepth;
if (r>255) r=255;
if (g>255) g=255;
if (b>255) b=255;
// Darken blue color to increase sepia effect
b-= sepiaIntensity;
// normalize if out of bounds
if (b<0) {
b=0;
}
if (b>255) {
b=255;
}
pixels[i] = r;
pixels[i+1]= g;
pixels[i+2] = b;
}
//raster.setPixels(0, 0, w, h, pixels);
bitmap.setARGB(pixels, 0, w, 0, 0, w, h);
return bitmap;
}
This call:
bitmap.getARGB(pixels, 0, w, x, y, w, h);
returns an int[] array where each int represents a color in the format 0xAARRGGBB. This differs from your previous code, which used JavaSE's Raster class.
EDIT: The method fixed for BlackBerry:
public static Bitmap changetoSepiaEffect(Bitmap bitmap) {
int sepiaIntensity = 30;// value lies between 0-255. 30 works well
// Play around with this. 20 works well and was recommended
// by another developer. 0 produces black/white image
int sepiaDepth = 20;
int w = bitmap.getWidth();
int h = bitmap.getHeight();
// Unlike JavaSE's Raster, we need an int per pixel
int[] pixels = new int[w * h];
// We get the whole image
bitmap.getARGB(pixels, 0, w, 0, 0, w, h);
// Process each pixel component. A pixel comes in the format 0xAARRGGBB.
for (int i = 0; i < pixels.length; i++) {
int r = (pixels[i] >> 16) & 0xFF;
int g = (pixels[i] >> 8) & 0xFF;
int b = pixels[i] & 0xFF;
int gry = (r + g + b) / 3;
r = g = b = gry;
r = r + (sepiaDepth * 2);
g = g + sepiaDepth;
if (r > 255)
r = 255;
if (g > 255)
g = 255;
if (b > 255)
b = 255;
// Darken blue color to increase sepia effect
b -= sepiaIntensity;
// normalize if out of bounds
if (b < 0) {
b = 0;
}
if (b > 255) {
b = 255;
}
// Now we compose a new pixel with the modified channels,
// and an alpha value of 0xFF (full opaque)
pixels[i] = ((r << 16) & 0xFF0000) | ((g << 8) & 0x00FF00) | (b & 0xFF) | 0xFF000000;
}
// We return a new Bitmap. Trying to modify the one passed as parameter
// could throw an exception, since in BlackBerry not all Bitmaps are modifiable.
Bitmap ret = new Bitmap(w, h);
ret.setARGB(pixels, 0, w, 0, 0, w, h);
return ret;
}