Monochrome and Negative effect on image for BlackBerry

I am new to BlackBerry app development; I started BB development just a week ago. I have a project on image effects and controls, and I am now facing problems converting images to negative and monochrome effects.
I have tried the following code for the negative effect. I do get a negative image as output, but the background comes out wrong: it shows up blue.
final static double GS_RED = 0.299;//globally declared luminance weights
final static double GS_GREEN = 0.587;
final static double GS_BLUE = 0.114;
public Bitmap changeToNegativeEffect(Bitmap bitmap) {
int width = bitmap.getWidth();
int height = bitmap.getHeight();
int[] argbData = new int[width * height];
bitmap.getARGB(argbData, 0, width, 0, 0, width, height);
for (int i = argbData.length - 1; i >= 0; i--) {
// Extract each channel first, then invert. The original code computed
// 255 - argbData[i] >> 16, which Java parses as (255 - argbData[i]) >> 16
// because '-' binds tighter than '>>'; that garbled the channels and
// produced the blue background.
int r = 255 - ((argbData[i] >> 16) & 0xFF);
int g = 255 - ((argbData[i] >> 8) & 0xFF);
int b = 255 - (argbData[i] & 0xFF);
argbData[i] = 0xFF000000 | (r << 16) | (g << 8) | b;
}
bitmap.setARGB(argbData, 0, width, 0, 0, width, height);
return bitmap;
}
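For the monochrome effect, a minimal sketch along the same lines, reusing the GS_ luminance weights declared above (this method is my addition, not from the original post):
public Bitmap changeToMonochromeEffect(Bitmap bitmap) {
int width = bitmap.getWidth();
int height = bitmap.getHeight();
int[] argbData = new int[width * height];
bitmap.getARGB(argbData, 0, width, 0, 0, width, height);
for (int i = argbData.length - 1; i >= 0; i--) {
int r = (argbData[i] >> 16) & 0xFF;
int g = (argbData[i] >> 8) & 0xFF;
int b = argbData[i] & 0xFF;
// Weighted luminance, using the GS_ constants declared above
int gray = (int) (GS_RED * r + GS_GREEN * g + GS_BLUE * b);
argbData[i] = 0xFF000000 | (gray << 16) | (gray << 8) | gray;
}
bitmap.setARGB(argbData, 0, width, 0, 0, width, height);
return bitmap;
}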

Related

How do I convert ByteArray from ImageMetaData() to Bitmap?

I have this code:
Frame frame = mSession.update();
Camera camera = frame.getCamera();
...
bytes=frame.getImageMetadata().getByteArray(0);
System.out.println("Byte Array "+frame.getImageMetadata().getByteArray(0));
Bitmap bmp = BitmapFactory.decodeByteArray(bytes,0,bytes.length);
System.out.println(bmp);
When I print Bitmap, I get a null object. I'm trying to get the image from the camera, that's the reason I'm trying to convert byteArray to Bitmap. If there's an alternative way, it would also be helpful.
Thank You.
The ImageMetaData describes the background image, but does not actually contain the image itself.
If you want to capture the background image as a Bitmap, you should look at the computervision sample which uses a FrameBufferObject to copy the image to a byte array.
I've tried something similar. It works, but I don't recommend this approach: it is slow because of the nested loops.
// inputImage comes from the computervision sample pipeline and is
// assumed to be filled elsewhere.
CameraImageBuffer inputImage;
final Bitmap bmp = Bitmap.createBitmap(inputImage.width, inputImage.height, Bitmap.Config.ARGB_8888);
int width = inputImage.width;
int height = inputImage.height;
int frameSize = width * height;
// Copy the ByteBuffer into a byte[]
byte[] imageBuffer = new byte[inputImage.buffer.remaining()];
inputImage.buffer.get(imageBuffer);
int[] rgba = new int[frameSize];
for (int i = 0; i < height; i++) {
for (int j = 0; j < width; j++) {
// Mask with 0xFF: Java bytes are signed, so channel values above 127
// would otherwise come out negative and corrupt the colors.
int r = imageBuffer[(i * width + j) * 4] & 0xFF;
int g = imageBuffer[(i * width + j) * 4 + 1] & 0xFF;
int b = imageBuffer[(i * width + j) * 4 + 2] & 0xFF;
rgba[i * width + j] = 0xff000000 | (b << 16) | (g << 8) | r;
}
}
bmp.setPixels(rgba, 0, width, 0, 0, width, height);
The ByteBuffer is converted to an RGBA buffer and written to the Bitmap. CameraImageBuffer is the class provided in the computervision sample app.
You may not be able to get a bitmap from the image metadata. Use the approach below instead: override the onDrawFrame method of the surface view renderer.
@Override public void onDrawFrame(GL10 gl) {
int w = 1080;
int h = 1080;
int b[] = new int[w * h];
int bt[] = new int[w * h];
IntBuffer ib = IntBuffer.wrap(b);
ib.position(0);
GLES20.glReadPixels(0, 0, w, h, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, ib);
for (int i = 0, k = 0; i < h; i++, k++) {
for (int j = 0; j < w; j++) {
int pix = b[i * w + j];
// Swap red and blue: glReadPixels returns RGBA, Bitmap expects ARGB
int pb = (pix >> 16) & 0xff;
int pr = (pix << 16) & 0x00ff0000;
int pix1 = (pix & 0xff00ff00) | pr | pb;
// glReadPixels reads rows bottom-up, so flip the image vertically
bt[(h - k - 1) * w + j] = pix1;
}
}
final Bitmap mBitmap = Bitmap.createBitmap(bt, w, h, Bitmap.Config.ARGB_8888);
runOnUiThread(new Runnable() {
@Override public void run() {
image_test.setImageBitmap(mBitmap);
}
});
}

RenderScript's allocation output returns a black Bitmap

A few days ago I started learning RenderScript. I managed to create some simple image-processing filters, e.g. grayscale and color change.
Now I'm working on a Canny edge filter, with no success.
Question: Why does the ImageView display a black image, and how do I solve it?
I'm using the implementation of the Canny edge filter from arekolek's GitHub.
Optional: Can I compute it faster?
I ended up with all the code written in one method, runEdgeFilter(...), which runs when I click the image on my device, to make sure I'm not messing with the imageView anywhere else. The code I use so far:
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.support.v8.renderscript.*;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.view.View;
import android.widget.ImageView;
public class MainActivity extends AppCompatActivity {
private static final float THRESHOLD_MULT_LOW = 0.66f * 0.00390625f;
private static final float THRESHOLD_MULT_HIGH = 1.33f * 0.00390625f;
private ImageView imageView;
private Bitmap img;
private boolean setThresholds = true;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
imageView = (ImageView) findViewById(R.id.imageView);
img = BitmapFactory.decodeResource(getResources(), R.drawable.test_img_no_dpi2);
imageView.setImageBitmap(img);
}
public void imageClicked(View view) {
runEdgeFilter(img, this);
}
private void runEdgeFilter(Bitmap image, Context context) {
int width = image.getWidth();
int height = image.getHeight();
RenderScript rs = RenderScript.create(context);
Allocation allocationIn = Allocation.createFromBitmap(rs, image);
Type.Builder tb;
tb = new Type.Builder(rs, Element.F32(rs)).setX(width).setY(height);
Allocation allocationBlurred = Allocation.createTyped(rs, tb.create());
Allocation allocationMagnitude = Allocation.createTyped(rs, tb.create());
tb = new Type.Builder(rs, Element.I32(rs)).setX(width).setY(height);
Allocation allocationDirection = Allocation.createTyped(rs, tb.create());
Allocation allocationEdge = Allocation.createTyped(rs, tb.create());
tb = new Type.Builder(rs, Element.I32(rs)).setX(256);
Allocation allocationHistogram = Allocation.createTyped(rs, tb.create());
tb = new Type.Builder(rs, Element.RGBA_8888(rs)).setX(width).setY(height);
Allocation allocationOut = Allocation.createTyped(rs, tb.create());
ScriptC_edge edgeFilter = new ScriptC_edge(rs);
ScriptIntrinsicHistogram histogram = ScriptIntrinsicHistogram.create(rs, Element.U8(rs));
histogram.setOutput(allocationHistogram);
edgeFilter.invoke_set_histogram(allocationHistogram);
edgeFilter.invoke_set_blur_input(allocationIn);
edgeFilter.invoke_set_compute_gradient_input(allocationBlurred);
edgeFilter.invoke_set_suppress_input(allocationMagnitude, allocationDirection);
edgeFilter.invoke_set_hysteresis_input(allocationEdge);
edgeFilter.invoke_set_thresholds(0.2f, 0.6f);
histogram.forEach_Dot(allocationIn);
int[] histogramOutput = new int[256];
allocationHistogram.copyTo(histogramOutput);
if(setThresholds) {
int median = width * height / 2;
for (int i = 0; i < 256; ++i) {
median -= histogramOutput[i];
if (median < 1) {
edgeFilter.invoke_set_thresholds(i * THRESHOLD_MULT_LOW, i * THRESHOLD_MULT_HIGH);
break;
}
}
}
edgeFilter.forEach_blur(allocationBlurred);
edgeFilter.forEach_compute_gradient(allocationMagnitude);
edgeFilter.forEach_suppress(allocationEdge);
edgeFilter.forEach_hysteresis(allocationOut);
allocationOut.copyTo(image);
allocationIn.destroy();
allocationMagnitude.destroy();
allocationBlurred.destroy();
allocationDirection.destroy();
allocationEdge.destroy();
allocationHistogram.destroy();
allocationOut.destroy();
histogram.destroy();
edgeFilter.destroy();
rs.destroy();
imageView.setImageBitmap(image);
}
}
renderscript edge.rs:
#pragma version(1)
#pragma rs java_package_name(com.lukasz.edgeexamplers)
#pragma rs_fp_relaxed
#include "rs_debug.rsh"
static rs_allocation raw, magnitude, blurred, direction, candidates;
static float low, high;
static const uint32_t zero = 0;
void set_blur_input(rs_allocation u8_buf) {
raw = u8_buf;
}
void set_compute_gradient_input(rs_allocation f_buf) {
blurred = f_buf;
}
void set_suppress_input(rs_allocation f_buf, rs_allocation i_buf) {
magnitude = f_buf;
direction = i_buf;
}
void set_hysteresis_input(rs_allocation i_buf) {
candidates = i_buf;
}
void set_thresholds(float l, float h) {
low = l;
high = h;
}
inline static float getElementAt_uchar_to_float(rs_allocation a, uint32_t x,
uint32_t y) {
return rsGetElementAt_uchar(a, x, y) / 255.0f;
}
static rs_allocation histogram;
void set_histogram(rs_allocation h) {
histogram = h;
}
uchar4 __attribute__((kernel)) addhisto(uchar in, uint32_t x, uint32_t y) {
int px = (x - 100) / 2;
if (px > -1 && px < 256) {
int v = log((float) rsGetElementAt_int(histogram, (uint32_t) px)) * 30;
int py = (400 - y);
if (py > -1 && v > py) {
in = 255;
}
if (py == -1) {
in = 255;
}
}
uchar4 out = { in, in, in, 255 };
return out;
}
uchar4 __attribute__((kernel)) copy(uchar in) {
uchar4 out = { in, in, in, 255 };
return out;
}
uchar4 __attribute__((kernel)) blend(uchar4 in, uint32_t x, uint32_t y) {
uchar r = rsGetElementAt_uchar(raw, x, y);
uchar4 out = { r, r, r, 255 };
return max(out, in);
}
float __attribute__((kernel)) blur(uint32_t x, uint32_t y) {
float pixel = 0;
pixel += 2 * getElementAt_uchar_to_float(raw, x - 2, y - 2);
pixel += 4 * getElementAt_uchar_to_float(raw, x - 1, y - 2);
pixel += 5 * getElementAt_uchar_to_float(raw, x, y - 2);
pixel += 4 * getElementAt_uchar_to_float(raw, x + 1, y - 2);
pixel += 2 * getElementAt_uchar_to_float(raw, x + 2, y - 2);
pixel += 4 * getElementAt_uchar_to_float(raw, x - 2, y - 1);
pixel += 9 * getElementAt_uchar_to_float(raw, x - 1, y - 1);
pixel += 12 * getElementAt_uchar_to_float(raw, x, y - 1);
pixel += 9 * getElementAt_uchar_to_float(raw, x + 1, y - 1);
pixel += 4 * getElementAt_uchar_to_float(raw, x + 2, y - 1);
pixel += 5 * getElementAt_uchar_to_float(raw, x - 2, y);
pixel += 12 * getElementAt_uchar_to_float(raw, x - 1, y);
pixel += 15 * getElementAt_uchar_to_float(raw, x, y);
pixel += 12 * getElementAt_uchar_to_float(raw, x + 1, y);
pixel += 5 * getElementAt_uchar_to_float(raw, x + 2, y);
pixel += 4 * getElementAt_uchar_to_float(raw, x - 2, y + 1);
pixel += 9 * getElementAt_uchar_to_float(raw, x - 1, y + 1);
pixel += 12 * getElementAt_uchar_to_float(raw, x, y + 1);
pixel += 9 * getElementAt_uchar_to_float(raw, x + 1, y + 1);
pixel += 4 * getElementAt_uchar_to_float(raw, x + 2, y + 1);
pixel += 2 * getElementAt_uchar_to_float(raw, x - 2, y + 2);
pixel += 4 * getElementAt_uchar_to_float(raw, x - 1, y + 2);
pixel += 5 * getElementAt_uchar_to_float(raw, x, y + 2);
pixel += 4 * getElementAt_uchar_to_float(raw, x + 1, y + 2);
pixel += 2 * getElementAt_uchar_to_float(raw, x + 2, y + 2);
pixel /= 159;
return pixel;
}
float __attribute__((kernel)) compute_gradient(uint32_t x, uint32_t y) {
float gx = 0;
gx -= rsGetElementAt_float(blurred, x - 1, y - 1);
gx -= rsGetElementAt_float(blurred, x - 1, y) * 2;
gx -= rsGetElementAt_float(blurred, x - 1, y + 1);
gx += rsGetElementAt_float(blurred, x + 1, y - 1);
gx += rsGetElementAt_float(blurred, x + 1, y) * 2;
gx += rsGetElementAt_float(blurred, x + 1, y + 1);
float gy = 0;
gy += rsGetElementAt_float(blurred, x - 1, y - 1);
gy += rsGetElementAt_float(blurred, x, y - 1) * 2;
gy += rsGetElementAt_float(blurred, x + 1, y - 1);
gy -= rsGetElementAt_float(blurred, x - 1, y + 1);
gy -= rsGetElementAt_float(blurred, x, y + 1) * 2;
gy -= rsGetElementAt_float(blurred, x + 1, y + 1);
int d = ((int) round(atan2pi(gy, gx) * 4.0f) + 4) % 4;
rsSetElementAt_int(direction, d, x, y);
return hypot(gx, gy);
}
int __attribute__((kernel)) suppress(uint32_t x, uint32_t y) {
int d = rsGetElementAt_int(direction, x, y);
float g = rsGetElementAt_float(magnitude, x, y);
if (d == 0) {
// horizontal, check left and right
float a = rsGetElementAt_float(magnitude, x - 1, y);
float b = rsGetElementAt_float(magnitude, x + 1, y);
return a < g && b < g ? 1 : 0;
} else if (d == 2) {
// vertical, check above and below
float a = rsGetElementAt_float(magnitude, x, y - 1);
float b = rsGetElementAt_float(magnitude, x, y + 1);
return a < g && b < g ? 1 : 0;
} else if (d == 1) {
// NW-SE
float a = rsGetElementAt_float(magnitude, x - 1, y - 1);
float b = rsGetElementAt_float(magnitude, x + 1, y + 1);
return a < g && b < g ? 1 : 0;
} else {
// NE-SW
float a = rsGetElementAt_float(magnitude, x + 1, y - 1);
float b = rsGetElementAt_float(magnitude, x - 1, y + 1);
return a < g && b < g ? 1 : 0;
}
}
static const int NON_EDGE = 0b000;
static const int LOW_EDGE = 0b001;
static const int MED_EDGE = 0b010;
static const int HIG_EDGE = 0b100;
inline static int getEdgeType(uint32_t x, uint32_t y) {
int e = rsGetElementAt_int(candidates, x, y);
float g = rsGetElementAt_float(magnitude, x, y);
if (e == 1) {
if (g < low)
return LOW_EDGE;
if (g > high)
return HIG_EDGE;
return MED_EDGE;
}
return NON_EDGE;
}
uchar4 __attribute__((kernel)) hysteresis(uint32_t x, uint32_t y) {
uchar4 white = { 255, 255, 255, 255 };
uchar4 red = { 255, 0, 0, 255 };
uchar4 black = { 0, 0, 0, 255 };
int type = getEdgeType(x, y);
if (type) {
if (type & LOW_EDGE) {
return black;
}
if (type & HIG_EDGE) {
//rsDebug("wh : x=", x);
//rsDebug("wh : y=", y);
return white;
}
// it's medium, check nearest neighbours
type = getEdgeType(x - 1, y - 1);
type |= getEdgeType(x, y - 1);
type |= getEdgeType(x + 1, y - 1);
type |= getEdgeType(x - 1, y);
type |= getEdgeType(x + 1, y);
type |= getEdgeType(x - 1, y + 1);
type |= getEdgeType(x, y + 1);
type |= getEdgeType(x + 1, y + 1);
if (type & HIG_EDGE) {
//rsDebug("wh : x=", x);
//rsDebug("wh : y=", y);
return white;
}
if (type & MED_EDGE) {
// check further
type = getEdgeType(x - 2, y - 2);
type |= getEdgeType(x - 1, y - 2);
type |= getEdgeType(x, y - 2);
type |= getEdgeType(x + 1, y - 2);
type |= getEdgeType(x + 2, y - 2);
type |= getEdgeType(x - 2, y - 1);
type |= getEdgeType(x + 2, y - 1);
type |= getEdgeType(x - 2, y);
type |= getEdgeType(x + 2, y);
type |= getEdgeType(x - 2, y + 1);
type |= getEdgeType(x + 2, y + 1);
type |= getEdgeType(x - 2, y + 2);
type |= getEdgeType(x - 1, y + 2);
type |= getEdgeType(x, y + 2);
type |= getEdgeType(x + 1, y + 2);
type |= getEdgeType(x + 2, y + 2);
if (type & HIG_EDGE) {
//rsDebug("wh : x=", x);
//rsDebug("wh : y=", y);
return white;
}
}
}
return black;
}
After some debugging I found that:
uchar4 __attribute__((kernel)) hysteresis(uint32_t x, uint32_t y) {...}
returns white and black pixels, so I think the RenderScript itself works properly.
The output is the same type as in my previous RenderScript filters (uchar4), which I assign to a Bitmap successfully.
I have no idea what I've done wrong.
Also my logcat prints:
V/RenderScript_jni: RS compat mode
V/RenderScript_jni: Unable to load libRSSupportIO.so, USAGE_IO not supported
V/RenderScript_jni: Unable to load BLAS lib, ONLY BNNM will be supported: java.lang.UnsatisfiedLinkError: Couldn't load blasV8 from loader dalvik.system.PathClassLoader[dexPath=/data/app/com.lukasz.edgeexamplers-20.apk,libraryPath=/data/app-lib/com.lukasz.edgeexamplers-20]: findLibrary returned null
E/RenderScript: Couldn't load libRSSupportIO.so
in every program that uses RenderScript, but other programs work even with these warnings.
Update #1
As @Stephen Hines mentioned, there was an issue with reading out of bounds. I think I fixed it for now (without touching the RenderScript) by changing these lines:
edgeFilter.forEach_blur(allocationBlurred);
edgeFilter.forEach_compute_gradient(allocationMagnitude);
edgeFilter.forEach_suppress(allocationEdge);
edgeFilter.forEach_hysteresis(allocationOut);
into:
Script.LaunchOptions sLaunchOpt = new Script.LaunchOptions();
sLaunchOpt.setX(2, width - 3);
sLaunchOpt.setY(2, height - 3);
edgeFilter.forEach_blur(allocationBlurred, sLaunchOpt);
edgeFilter.forEach_compute_gradient(allocationMagnitude, sLaunchOpt);
edgeFilter.forEach_suppress(allocationEdge, sLaunchOpt);
edgeFilter.forEach_hysteresis(allocationOut, sLaunchOpt);
But my problem is still not solved. The output is black, as before.
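One way to narrow this down (my suggestion, not part of the original post) is to check whether the input allocation round-trips to the Bitmap at all, which separates an allocation-to-bitmap copy problem from a kernel problem:
// Debugging sketch: bypass the filter chain entirely. If this already
// shows a black image, the problem is in the Allocation <-> Bitmap copy,
// not in the kernels.
Allocation allocationIn = Allocation.createFromBitmap(rs, image);
allocationIn.copyTo(image);
imageView.setImageBitmap(image);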

Sepia Image effect in Blackberry

I am trying to apply a sepia effect to an image in BlackBerry.
I have tried, but I don't get a 100% sepia effect.
This is the code I have tried for the sepia effect.
I have used the getARGB() and setARGB() methods of the Bitmap class.
public Bitmap changetoSepiaEffect(Bitmap bitmap) {
int sepiaIntensity=30;//value lies between 0-255. 30 works well
// Play around with this. 20 works well and was recommended
// by another developer. 0 produces black/white image
int sepiaDepth = 20;
int w = bitmap.getWidth();
int h = bitmap.getHeight();
// WritableRaster raster = img.getRaster();
// We need 3 integers (for R,G,B color values) per pixel.
int[] pixels = new int[w*h*3];
// raster.getPixels(0, 0, w, h, pixels);
bitmap.getARGB(pixels, 0, w, x, y, w, h);
// Process 3 ints at a time for each pixel.
// Each pixel has 3 RGB colors in array
for (int i=0;i<pixels.length; i+=3) {
int r = pixels[i];
int g = pixels[i+1];
int b = pixels[i+2];
int gry = (r + g + b) / 3;
r = g = b = gry;
r = r + (sepiaDepth * 2);
g = g + sepiaDepth;
if (r>255) r=255;
if (g>255) g=255;
if (b>255) b=255;
// Darken blue color to increase sepia effect
b-= sepiaIntensity;
// normalize if out of bounds
if (b<0) {
b=0;
}
if (b>255) {
b=255;
}
pixels[i] = r;
pixels[i+1]= g;
pixels[i+2] = b;
}
//raster.setPixels(0, 0, w, h, pixels);
bitmap.setARGB(pixels, 0, w, 0, 0, w, h);
return bitmap;
}
This call:
bitmap.getARGB(pixels, 0, w, x, y, w, h);
returns an int[] array where each int represents a color in the format 0xAARRGGBB. This differs from your previous code, which used JavaSE's Raster class.
EDIT: The method fixed for BlackBerry:
public static Bitmap changetoSepiaEffect(Bitmap bitmap) {
int sepiaIntensity = 30;// value lies between 0-255. 30 works well
// Play around with this. 20 works well and was recommended
// by another developer. 0 produces black/white image
int sepiaDepth = 20;
int w = bitmap.getWidth();
int h = bitmap.getHeight();
// Unlike JavaSE's Raster, we need an int per pixel
int[] pixels = new int[w * h];
// We get the whole image
bitmap.getARGB(pixels, 0, w, 0, 0, w, h);
// Process each pixel component. A pixel comes in the format 0xAARRGGBB.
for (int i = 0; i < pixels.length; i++) {
int r = (pixels[i] >> 16) & 0xFF;
int g = (pixels[i] >> 8) & 0xFF;
int b = pixels[i] & 0xFF;
int gry = (r + g + b) / 3;
r = g = b = gry;
r = r + (sepiaDepth * 2);
g = g + sepiaDepth;
if (r > 255)
r = 255;
if (g > 255)
g = 255;
if (b > 255)
b = 255;
// Darken blue color to increase sepia effect
b -= sepiaIntensity;
// normalize if out of bounds
if (b < 0) {
b = 0;
}
if (b > 255) {
b = 255;
}
// Now we compose a new pixel with the modified channels,
// and an alpha value of 0xFF (full opaque)
pixels[i] = ((r << 16) & 0xFF0000) | ((g << 8) & 0x00FF00) | (b & 0xFF) | 0xFF000000;
}
// We return a new Bitmap. Trying to modify the one passed as parameter
// could throw an exception, since in BlackBerry not all Bitmaps are modifiable.
Bitmap ret = new Bitmap(w, h);
ret.setARGB(pixels, 0, w, 0, 0, w, h);
return ret;
}
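A minimal usage sketch (my addition; it assumes a screen class where fields can be added and an add.png resource, as in the grayscale question below):
Bitmap original = Bitmap.getBitmapResource("add.png");
Bitmap sepia = changetoSepiaEffect(original);
BitmapField field = new BitmapField(sepia, BitmapField.FOCUSABLE);
add(field);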

Convert a Bitmap image to grayscale within BlackBerry J2ME

I have been trying to use the samples from here:
J2ME: Convert transparent PNG image to grayscale
and here:
http://www.java2s.com/Code/Java/Collections-Data-Structure/intarraytobytearray.htm
to convert a Bitmap image object to grayscale on the fly, but I run into issues when I try to re-encode my bytes to an image, and I get the following error/stack:
(Suspended (exception IllegalArgumentException))
EncodedImage.createEncodedImage(byte[], int, int, String) line: 367
EncodedImage.createEncodedImage(byte[], int, int) line: 279
ScreenTemp.getGrayScaleImage(Bitmap) line: 404
Here is the code I am trying:
Bitmap btemp = getGrayScaleImage(Bitmap.getBitmapResource("add.png"));
BitmapField bftemp = new BitmapField(btemp, BitmapField.FOCUSABLE | BitmapField.FIELD_HCENTER | BitmapField.FIELD_VCENTER);
add(bftemp);
public Bitmap getGrayScaleImage(Bitmap image) {
int width = image.getWidth();
int height = image.getHeight();
int[] rgbData = new int[width * height];
image.getARGB(rgbData, 0, width, 0, 0, width, height);
for (int x = 0; x < width*height ; x++) {
rgbData[x] = getGrayScale(rgbData[x]);
}
byte[] b = int2byte(rgbData);
final EncodedImage jpegPic = EncodedImage.createEncodedImage(b, 0, b.length);
return jpegPic.getBitmap();
}
private int getGrayScale(int c) {
int[] p = new int[4];
p[0] = (int) ((c & 0xFF000000) >>> 24); // Opacity level
p[1] = (int) ((c & 0x00FF0000) >>> 16); // Red level
p[2] = (int) ((c & 0x0000FF00) >>> 8); // Green level
p[3] = (int) (c & 0x000000FF); // Blue level
int nc = p[1] / 3 + p[2] / 3 + p[3] / 3;
// a little bit brighter
nc = nc / 2 + 127;
p[1] = nc;
p[2] = nc;
p[3] = nc;
int gc = (p[0] << 24 | p[1] << 16 | p[2] << 8 | p[3]);
return gc;
}
private static byte[] int2byte(int[] src) {
int srcLength = src.length;
byte[]dst = new byte[srcLength << 2];
for (int i=0; i<srcLength; i++) {
int x = src[i];
int j = i << 2;
dst[j++] = (byte) ((x >>> 0) & 0xff);
dst[j++] = (byte) ((x >>> 8) & 0xff);
dst[j++] = (byte) ((x >>> 16) & 0xff);
dst[j++] = (byte) ((x >>> 24) & 0xff);
}
return dst;
}
Any help would be great!
Thanks,
Justin
EDIT:
Thanks to the information below, I was able to fix this issue. Here is the code. You no longer need int2byte; here is the updated getGrayScaleImage method:
public Bitmap getGrayScaleImage(Bitmap image) {
int width = image.getWidth();
int height = image.getHeight();
int[] rgbData = new int[width * height];
image.getARGB(rgbData, 0, width, 0, 0, width, height);
for (int x = 0; x < width*height ; x++) {
rgbData[x] = getGrayScale(rgbData[x]);
}
Bitmap bit = new Bitmap(width, height);
bit.setARGB(rgbData, 0, width, 0, 0, width, height);
return bit;
}
Quoting from the EncodedImage javadoc:
If the image format is not recognized, an IllegalArgumentException is thrown.
Why are you fiddling with EncodedImage? It seems like you ought to be able to just create a second Bitmap and use setARGB().
To extend Scott W's answer:
EncodedImage.createEncodedImage(byte[] data, int offset, int length) expects a byte array in a supported image format (TIFF, BMP, JPEG, GIF, WBMP or PNG). For instance, if you opened a JPEG image file and read the file bytes, it would be possible to use those bytes to create an EncodedImage (it would actually be a JPEGEncodedImage).
So, as Scott W says, you should use Bitmap.setARGB() with the converted pixel array to get a Bitmap with the converted data.
And then, if you need to save the image as a JPEG file, you can use something like this:
JPEGEncodedImage eImage = JPEGEncodedImage.encode(bitmap, 75);
byte[] fileData = eImage.getData();
// open a FileConnection and write the fileData
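A minimal sketch of that last step, assuming JSR-75 file access is available; the path is illustrative, not from the original post:
import java.io.IOException;
import java.io.OutputStream;
import javax.microedition.io.Connector;
import javax.microedition.io.file.FileConnection;
public static void writeJpeg(byte[] fileData) throws IOException {
// Hypothetical path; adjust to a location your app may write to
FileConnection fc = (FileConnection) Connector.open("file:///SDCard/gray.jpg", Connector.READ_WRITE);
try {
if (!fc.exists()) {
fc.create();
}
OutputStream os = fc.openOutputStream();
os.write(fileData);
os.close();
} finally {
fc.close();
}
}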

Fast RGB => YUV conversion in OpenCL

I know the following formula can be used to convert RGB images to YUV images. In the following formula, R, G, B, Y, U, V are all 8-bit unsigned integers, and intermediate values are 16-bit unsigned integers.
Y = ( ( 66 * R + 129 * G + 25 * B + 128) >> 8) + 16
U = ( ( -38 * R - 74 * G + 112 * B + 128) >> 8) + 128
V = ( ( 112 * R - 94 * G - 18 * B + 128) >> 8) + 128
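As a quick sanity check on these formulas (my arithmetic, not from the original post): for white, R = G = B = 255, so Y = (((66 + 129 + 25) * 255 + 128) >> 8) + 16 = 235 and U = V = 128, i.e. studio-swing white.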
But when the formula is used in OpenCL, it's a different story.
1. 8-bit memory write access is an optional extension, which means some OpenCL implementations may not support it.
2. Even when the above extension is supported, it's deadly slow compared with 32-bit write access.
In order to get better performance, every 4 pixels will be processed at the same time, so the input is 12 8-bit integers and the output is 3 32-bit unsigned integers(the first one stands for 4 Y samples, the second one stands for 4 U samples, the last one stands for 4 V samples).
My question is: how do I get these 3 32-bit integers directly from the 12 8-bit integers? Is there a formula to get these 3 32-bit integers, or do I just need to use the old formula to get 12 8-bit results (4 Y, 4 U, 4 V) and construct the 3 32-bit integers with bitwise operations?
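For the second option, the packing itself is just four masked shifts ORed together; a minimal sketch (written in Java for clarity, not from the original post; the same bitwise logic applies in OpenCL C):
// Packs four 8-bit samples into one 32-bit word, with sample 0 in the
// lowest byte (little-endian byte order within the word).
static int pack4(int s0, int s1, int s2, int s3) {
return (s0 & 0xFF) | ((s1 & 0xFF) << 8) | ((s2 & 0xFF) << 16) | ((s3 & 0xFF) << 24);
}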
Even though this question was asked 2 years ago, I think some working code would help here. Regarding the initial concern about poor performance when directly accessing 8-bit values, it's better to perform 32-bit direct access when possible.
Some time ago I developed and used the following OpenCL kernel to convert ARGB (the typical Windows bitmap pixel layout) to the y-plane (full size) plus u/v half-planes (quarter size) memory layout used as input for libx264 encoding.
__kernel void ARGB2YUV (
__global unsigned int * sourceImage,
__global unsigned int * destImage,
unsigned int srcHeight,
unsigned int srcWidth,
unsigned int yuvStride // must be srcWidth/4 since we pack 4 pixels into 1 Y-unit (with 4 y-pixels)
)
{
int i,j;
unsigned int RGBs [ 4 ];
unsigned int posSrc, RGB, Value4 = 0, Value, yuvStrideHalf, srcHeightHalf, yPlaneOffset, posOffset;
unsigned char red, green, blue;
unsigned int posX = get_global_id(0);
unsigned int posY = get_global_id(1);
if ( posX < yuvStride ) {
// Y plane - pack 4 y's within each work item
if ( posY >= srcHeight )
return;
posSrc = (posY * srcWidth) + (posX * 4);
RGBs [ 0 ] = sourceImage [ posSrc ];
RGBs [ 1 ] = sourceImage [ posSrc + 1 ];
RGBs [ 2 ] = sourceImage [ posSrc + 2 ];
RGBs [ 3 ] = sourceImage [ posSrc + 3 ];
for ( i=0; i<4; i++ ) {
RGB = RGBs [ i ];
blue = RGB & 0xff; green = (RGB >> 8) & 0xff; red = (RGB >> 16) & 0xff;
Value = ( ( 66 * red + 129 * green + 25 * blue ) >> 8 ) + 16;
Value4 |= (Value << (i * 8));
}
destImage [ (posY * yuvStride) + posX ] = Value4;
return;
}
posX -= yuvStride;
yuvStrideHalf = yuvStride >> 1;
// U plane - pack 4 u's within each work item
if ( posX >= yuvStrideHalf )
return;
srcHeightHalf = srcHeight >> 1;
if ( posY < srcHeightHalf ) {
posSrc = ((posY * 2) * srcWidth) + (posX * 8);
RGBs [ 0 ] = sourceImage [ posSrc ];
RGBs [ 1 ] = sourceImage [ posSrc + 2 ];
RGBs [ 2 ] = sourceImage [ posSrc + 4 ];
RGBs [ 3 ] = sourceImage [ posSrc + 6 ];
for ( i=0; i<4; i++ ) {
RGB = RGBs [ i ];
blue = RGB & 0xff; green = (RGB >> 8) & 0xff; red = (RGB >> 16) & 0xff;
Value = ( ( -38 * red + -74 * green + 112 * blue ) >> 8 ) + 128;
Value4 |= (Value << (i * 8));
}
yPlaneOffset = yuvStride * srcHeight;
posOffset = (posY * yuvStrideHalf) + posX;
destImage [ yPlaneOffset + posOffset ] = Value4;
return;
}
posY -= srcHeightHalf;
if ( posY >= srcHeightHalf )
return;
// V plane - pack 4 v's within each work item
posSrc = ((posY * 2) * srcWidth) + (posX * 8);
RGBs [ 0 ] = sourceImage [ posSrc ];
RGBs [ 1 ] = sourceImage [ posSrc + 2 ];
RGBs [ 2 ] = sourceImage [ posSrc + 4 ];
RGBs [ 3 ] = sourceImage [ posSrc + 6 ];
for ( i=0; i<4; i++ ) {
RGB = RGBs [ i ];
blue = RGB & 0xff; green = (RGB >> 8) & 0xff; red = (RGB >> 16) & 0xff;
Value = ( ( 112 * red + -94 * green + -18 * blue ) >> 8 ) + 128;
Value4 |= (Value << (i * 8));
}
yPlaneOffset = yuvStride * srcHeight;
posOffset = (posY * yuvStrideHalf) + posX;
destImage [ yPlaneOffset + (yPlaneOffset >> 2) + posOffset ] = Value4;
return;
}
This code performs only global 32-bit memory access while 8-bit processing happens within each work item.
Oh, and the proper code to invoke the kernel:
unsigned int width = 1024;
unsigned int height = 768;
unsigned int frameSize = width * height;
const unsigned int argbSize = frameSize * 4; // ARGB pixels
const unsigned int yuvSize = frameSize + (frameSize >> 1); // Y,U,V planes
const unsigned int yuvStride = width >> 2; // since we pack 4 RGBs into "one" YYYY
// Allocates ARGB buffer
ocl_rgb_buffer = clCreateBuffer ( context, CL_MEM_READ_WRITE, argbSize, 0, &error );
// ... error handling ...
ocl_yuv_buffer = clCreateBuffer ( context, CL_MEM_READ_WRITE, yuvSize, 0, &error );
// ... error handling ...
error = clSetKernelArg ( kernel, 0, sizeof(cl_mem), &ocl_rgb_buffer );
error |= clSetKernelArg ( kernel, 1, sizeof(cl_mem), &ocl_yuv_buffer );
error |= clSetKernelArg ( kernel, 2, sizeof(unsigned int), &height);
error |= clSetKernelArg ( kernel, 3, sizeof(unsigned int), &width);
error |= clSetKernelArg ( kernel, 4, sizeof(unsigned int), &yuvStride);
// ... error handling ...
const size_t local_ws[] = { 16, 16 };
const size_t global_ws[] = { yuvStride + (yuvStride >> 1), height };
error = clEnqueueNDRangeKernel ( queue, kernel, 2, NULL, global_ws, local_ws, 0, NULL, NULL );
// ... error handling ...
Note: have a look at the work-item calculations. Some additional code needs to be added (e.g. rounding up so as to add sufficient spare items) to make sure the global work sizes are multiples of the local work sizes.
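A minimal sketch of that rounding (in Java for clarity, not from the original post); the kernel must then guard against the spare items with bounds checks, as the kernel above already does with its early returns:
// Round a global work size up to the next multiple of the local work size.
static long roundUpToMultiple(long globalSize, long localSize) {
return ((globalSize + localSize - 1) / localSize) * localSize;
}
For example, global_ws[0] would become roundUpToMultiple(yuvStride + (yuvStride >> 1), 16).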
Like this? Use int4 unless your platform supports int3. Also, you can pack 5 pixels into an int16, so you waste 1/16 instead of 1/4 of the memory bandwidth.
__kernel void rgb2yuv( __global int3* input, __global int3* output){
int3 rgb = input[get_global_id(0)];
int R = rgb.x;
int G = rgb.y;
int B = rgb.z;
int3 yuv;
yuv.x = ( ( 66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
yuv.y = ( ( -38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
yuv.z = ( ( 112 * R - 94 * G - 18 * B + 128) >> 8) + 128;
output[get_global_id(0)] = yuv;
}
According to the OpenCL specification, the data type int3 doesn't exist.
Page 123:
Supported values of n are 2, 4, 8, and 16...
In your kernel, the variables rgb, R, G, B, and yuv should be at least __private int4.
OpenCL 1.1 added support for vector types with n = 3. However, I strongly recommend you don't use them: different vendor implementations have different bugs, and it's not saving you anything.