In my project I noticed some rendering differences between Android and iOS. includeNativeBool is false.
For example, the code for this:
@Override
public void paint(Graphics g) {
    // x, y, w, h, r1, r2, d1, d2 are instance fields reused by the methods below
    x = getX();
    y = getY();
    w = getWidth();
    h = getHeight();
    r1 = w / 20;
    r2 = r1 / 2;
    d1 = r1 * 2;
    d2 = r2 * 2;
    // Fill background
    g.setColor(getStyle().getFgColor());
    g.fillRect(x, y, w, h);
}
public void FirstPart(Graphics g) {
    g.setColor(getStyle().getBgColor());
    // North-West
    g.fillArc(x - r1, y - r1, d1, d1, 270, 360);
    // North
    g.fillArc(x + w/2 - r1, y - r1, d1, d1, 180, 360);
    // North-East
    g.fillArc(x + w - r1, y - r1, d1, d1, 90, 180);
}
public void MiddlePartBegin(Graphics g) {
    g.setColor(getStyle().getBgColor());
    // North-West
    g.fillArc(x - r2, y - r2, d2, d2, 270, 360);
    // North-East
    g.fillArc(x + w - r2, y - r2, d2, d2, 90, 180);
}
public void MiddlePartEnd(Graphics g) {
    if (dash != null) {
        int c = w / iW + (w % iW > 0 ? 1 : 0); // ceil(w / iW)
        for (int i = 0; i < c; i++) {
            g.drawImage(dash, i * iW + x, y + h - 1);
        }
    }
    g.setColor(getStyle().getBgColor());
    // South-West
    g.fillArc(x - r2, y + h - r2, d2, d2, 270, 360);
    // South-East
    g.fillArc(x + w - r2, y + h - r2, d2, d2, 270, 360);
}
public void LastPart(Graphics g) {
    g.setColor(getStyle().getBgColor());
    // South-West
    g.fillArc(x - r1, y + h - r1, d1, d1, 270, 360);
    // South-East
    g.fillArc(x + w - r1, y + h - r1, d1, d1, 270, 360);
}
Or this:
For the image I use the URLImage class. Here is my adapter code:
public static final URLImage.ImageAdapter ToCircle = new URLImage.ImageAdapter() {
    int borderWidth = 6;

    public EncodedImage adaptImage(EncodedImage downloadedImage, EncodedImage placeholderImage) {
        Image originalImage;
        // Crop to a centered square, then resize
        int w = downloadedImage.getWidth();
        int h = downloadedImage.getHeight();
        if (w > h) {
            originalImage = downloadedImage.subImage((w - h) / 2, 0, h, h, true);
        } else {
            originalImage = downloadedImage.subImage(0, (h - w) / 2, w, w, true);
        }
        int pS = Math.min(placeholderImage.getHeight(), placeholderImage.getWidth());
        originalImage = originalImage.scaledHeight(pS);
        w = originalImage.getWidth();
        h = originalImage.getHeight();
        Log.p(Integer.toString(w) + ";" + Integer.toString(h));
        Image finalImage = Image.createImage(w + 2 * borderWidth, h + 2 * borderWidth);
        Image maskedImage = originalImage.applyMask(createCircleMask(w, h));
        Graphics g = finalImage.getGraphics();
        g.setColor(0xff3d00);
        g.fillRect(0, 0, finalImage.getWidth(), finalImage.getHeight());
        g.drawImage(maskedImage, borderWidth, borderWidth);
        w = finalImage.getWidth();
        h = finalImage.getHeight();
        return EncodedImage.createFromImage(
            finalImage.applyMask(createCircleMask(w, h)),
            false
        );
    }

    public Object createCircleMask(int w, int h) {
        Image maskImage = Image.createImage(w, h);
        Graphics g = maskImage.getGraphics();
        g.setAntiAliased(true);
        g.setColor(0x000000);
        g.fillRect(0, 0, w, h);
        g.setColor(0xffffff);
        g.fillArc(0, 0, w, h, 0, 360);
        return maskImage.createMask();
    }

    public boolean isAsyncAdapter() {
        return false;
    }
};
In the last case, the trouble is that the images sometimes aren't resized to the placeholder image's size...
I also noticed that elements with transparency are displayed as transparent even where the alpha value is 0xFF.
The missing padding at the top is due to includeNativeBool=false, since iOS draws under the status bar area.
For the masking, see if this helps:
originalImage = originalImage.scaledHeight(pS);
// then add this
originalImage = EncodedImage.createFromImage(originalImage, false);
As for the arcs not appearing, it's hard to tell how you applied that code to the UI. I'm assuming you didn't use something consistent like a glass pane?
Since a component can trigger its own repaint, some custom painting code might not run in the right order.
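For example, a minimal glass pane sketch (setGlassPane and Painter are standard Codename One APIs; the arc drawn here is just a placeholder for your border painting):
Form f = Display.getInstance().getCurrent();
f.setGlassPane(new Painter() {
    public void paint(Graphics g, Rectangle rect) {
        // The glass pane is painted after all components, so a child's
        // own repaint cannot draw over it out of order.
        g.setColor(0x00ff00);
        g.fillArc(rect.getX(), rect.getY(), 40, 40, 0, 360);
    }
});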
Related
How can I calculate the distance between a fixed parameter point and a target image pixel?
The following code does color recognition, finds the average position of the matching pixels, and draws a circle there. It can tell whether the target (averageX, averageY) is close to leftPd, centerPd, or rightPd. I want to turn this code into lane tracking that can at least measure the distance between the leftP parameter point and the left lane, or the rightP parameter point and the right lane.
import processing.video.*;

Capture video;
float threshold = 210;
color trackColor;
PVector leftP, centerP, rightP, target;

void setup() {
  size(640, 480); // size() must run first; width/height are not set before it
  leftP = new PVector(80, 420);
  centerP = new PVector(width/2, 380);
  rightP = new PVector(560, 420);
  video = new Capture(this, width, height);
  video.start();
  trackColor = color(160, 0, 0); // Start off tracking red
}

void captureEvent(Capture video) {
  // Read image from the camera
  video.read();
}

void draw() {
  loadPixels();
  video.loadPixels();
  image(video, 0, 0);
  float avgX = 0;
  float avgY = 0;
  int count = 0;
  for (int x = 0; x < video.width; x++) {
    for (int y = 0; y < video.height; y++) {
      int loc = x + y * video.width;
      color currentColor = video.pixels[loc];
      float r1 = red(currentColor);
      float g1 = green(currentColor);
      float b1 = blue(currentColor);
      float r2 = red(trackColor);
      float g2 = green(trackColor);
      float b2 = blue(trackColor);
      // Using squared Euclidean distance to compare colors
      float d = distSq(r1, g1, b1, r2, g2, b2);
      if (d < threshold) {
        stroke(255);
        strokeWeight(1);
        point(x, y);
        avgX += x;
        avgY += y;
        count++;
      }
    }
  }
  if (count > 0) {
    avgX = avgX / count;
    avgY = avgY / count;
    // Draw a circle at the tracked position
    fill(trackColor);
    strokeWeight(4.0);
    stroke(0);
    ellipse(avgX, avgY, 20, 20);
    text("brightness level: " + trackColor, 20, 60);
    text("FPS: " + frameRate, 20, 80);
  }
  target = new PVector(avgX, avgY);
  color c = color(255, 204, 0);
  fill(c);
  noStroke();
  ellipse(leftP.x, leftP.y, 16, 16);     // left param
  ellipse(centerP.x, centerP.y, 16, 16); // center param
  ellipse(rightP.x, rightP.y, 16, 16);   // right param
  float leftPd = leftP.dist(target);
  float centerPd = centerP.dist(target);
  float rightPd = rightP.dist(target);
  if (leftPd <= 85) {
    text("Too close: left", 20, 250);
  }
  if (centerPd <= 85) {
    text("Too close: center", 20, 275);
  }
  if (rightPd <= 85) {
    text("Too close: right", 20, 300);
  }
}

float distSq(float x1, float y1, float z1, float x2, float y2, float z2) {
  float d = (x2-x1)*(x2-x1) + (y2-y1)*(y2-y1) + (z2-z1)*(z2-z1);
  return d;
}

void mousePressed() {
  // Save the color under the mouse click in trackColor
  int loc = mouseX + mouseY * video.width;
  trackColor = video.pixels[loc];
}
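If it helps as a starting point, here is a sketch of measuring the distance from a parameter point to the nearest lane pixel on its row, reusing the variables above. distanceToLane is a hypothetical helper, not part of the original code:
// Hypothetical helper: horizontal distance from a fixed parameter point
// to the nearest pixel on its row whose color matches trackColor.
float distanceToLane(PVector param) {
  int row = constrain((int) param.y, 0, video.height - 1);
  float best = -1; // -1 means no lane pixel was found on this row
  for (int x = 0; x < video.width; x++) {
    color c = video.pixels[x + row * video.width];
    float d = distSq(red(c), green(c), blue(c),
                     red(trackColor), green(trackColor), blue(trackColor));
    if (d < threshold) {
      float dist = abs(x - param.x);
      if (best < 0 || dist < best) best = dist;
    }
  }
  return best;
}
In draw() you could then call, e.g., float leftLaneDist = distanceToLane(leftP); and compare it against your 85-pixel limit.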
I'm trying to create a scrolling image that wraps around a canvas to follow its own tail. I've been using PixelWriters and PixelReaders to save off the vertical pixel lines that scroll off the west side of the screen, and to append them to a new image that should grow on the east (right-hand) side of the screen.
It scrolls, but that's all that happens. I don't understand how to calculate the scanline stride, so apologies for that part.
Any help appreciated.
package controller;

import javafx.animation.AnimationTimer;
import javafx.scene.canvas.Canvas;
import javafx.scene.canvas.GraphicsContext;
import javafx.scene.image.*;
import javafx.scene.layout.*;
import java.io.File;
import java.nio.file.Path;
import java.nio.file.Paths;

class ImageContainer extends HBox {
    int w, h;
    int translatedAmount = 0;
    Image image;
    Canvas canvas;
    long startNanoTime = System.nanoTime();
    WritableImage eastImage = null;

    public ImageContainer() {
        setVisible(true);
        load();
        w = (int) image.getWidth();
        h = (int) image.getHeight();
        canvas = new Canvas(w, h);
        int edgeX = (int) canvas.getWidth(); // set this a little west of the edge for visibility while debugging
        getChildren().addAll(canvas);
        GraphicsContext gc = canvas.getGraphicsContext2D();
        canvas.setVisible(true);
        gc.drawImage(image, 0, 0, w, h);
        setPrefSize(w, h);
        eastImage = new WritableImage(translatedAmount + 1, h); // create a new eastImage
        new AnimationTimer() {
            public void handle(long currentNanoTime) {
                if (((System.nanoTime() - startNanoTime) / 1000000000.0) < 0.05) {
                    return;
                } else {
                    startNanoTime = System.nanoTime();
                }
                translatedAmount++;
                // get the 1 pixel strip at the west edge (column 0) of the main image;
                // this is the column that is cropped off the main image below
                Image westLine = getSubImageRectangle(image, 0, 0, 1, h);
                PixelReader westLinePixelReader = westLine.getPixelReader();
                byte[] westLinePixelBuffer = new byte[h * 4];
                // scanline stride is the row length in bytes (width * 4 for BGRA); for a 1 pixel strip that is 4
                westLinePixelReader.getPixels(0, 0, 1, h, PixelFormat.getByteBgraInstance(), westLinePixelBuffer, 0, 4);
                Image tempImg = eastImage; // save away the current east side image
                int tempW = (int) tempImg.getWidth();
                byte[] tempBuffer = new byte[tempW * h * 4];
                PixelReader tempImagePixelReader = tempImg.getPixelReader();
                // here the stride must be tempW * 4, not 4
                tempImagePixelReader.getPixels(0, 0, tempW, h, PixelFormat.getByteBgraInstance(), tempBuffer, 0, tempW * 4);
                eastImage = new WritableImage(translatedAmount + 1, h); // create a new eastImage, one pixel wider
                PixelWriter eastImagePixelWriter = eastImage.getPixelWriter();
                // copy the old east image in at x=0, then append the westLine at x=tempW
                eastImagePixelWriter.setPixels(0, 0, tempW, h, PixelFormat.getByteBgraInstance(), tempBuffer, 0, tempW * 4);
                eastImagePixelWriter.setPixels(tempW, 0, 1, h, PixelFormat.getByteBgraInstance(), westLinePixelBuffer, 0, 4);
                image = getSubImageRectangle(image, 1, 0, (int) image.getWidth() - 1, h);
                gc.drawImage(image, 0, 0); // draw main image
                System.out.println(edgeX - eastImage.getWidth());
                gc.drawImage(eastImage, edgeX - eastImage.getWidth(), 0); // add the lost image lines
            }
        }.start();
    }

    public void load() {
        Path imagePath = Paths.get("./src/main/resources/ribbonImages/clouds.png");
        File f = imagePath.toFile();
        assert f.exists();
        image = new Image(f.toURI().toString());
    }

    public Image getSubImageRectangle(Image image, int x, int y, int w, int h) {
        PixelReader pixelReader = image.getPixelReader();
        return new WritableImage(pixelReader, x, y, w, h);
    }
}
Why make this more difficult than necessary? Simply draw the image to the Canvas twice:
public static void drawImage(Canvas canvas, Image sourceImage, double offset, double wrapWidth) {
    GraphicsContext gc = canvas.getGraphicsContext2D();
    gc.clearRect(0, 0, canvas.getWidth(), canvas.getHeight());
    // make |offset| < wrapWidth
    offset %= wrapWidth;
    if (offset < 0) {
        // make sure negative offsets do not result in the previous copy
        // of the image not being drawn
        offset += wrapWidth;
    }
    gc.drawImage(sourceImage, -offset, 0);
    gc.drawImage(sourceImage, wrapWidth - offset, 0);
}
@Override
public void start(Stage primaryStage) {
    Image image = new Image("https://upload.wikimedia.org/wikipedia/commons/thumb/e/ec/Mona_Lisa%2C_by_Leonardo_da_Vinci%2C_from_C2RMF_retouched.jpg/402px-Mona_Lisa%2C_by_Leonardo_da_Vinci%2C_from_C2RMF_retouched.jpg");
    Canvas canvas = new Canvas(image.getWidth(), image.getHeight());
    primaryStage.setResizable(false);
    Scene scene = new Scene(new Group(canvas));
    DoubleProperty offset = new SimpleDoubleProperty();
    offset.addListener((observable, oldOffset, newOffset) ->
            drawImage(canvas, image, newOffset.doubleValue(), canvas.getWidth()));
    Timeline timeline = new Timeline(
            new KeyFrame(Duration.ZERO, new KeyValue(offset, 0, Interpolator.LINEAR)),
            new KeyFrame(Duration.seconds(10), new KeyValue(offset, image.getWidth() * 2, Interpolator.LINEAR))
    );
    timeline.setCycleCount(Animation.INDEFINITE);
    timeline.play();
    primaryStage.setScene(scene);
    primaryStage.sizeToScene();
    primaryStage.show();
}
I am trying to denoise this image to get better edges.
I've tried bilateralFilter, GaussianBlur, morphological close and several thresholds, but every time I get an image like:
and when I run HoughLinesP on the dilated edges, the result is really bad.
Can someone help me improve this? Is there a way to remove that noise?
First try: using GaussianBlur. In this case I must also use equalizeHist, or I can't get edges even with a really low threshold.
import org.opencv.core.*;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import org.opencv.photo.Photo;
// Imshow is a small helper class used here to display a Mat in a window

public class TesteNormal {
    static {
        System.loadLibrary("opencv_java310");
    }

    public static void main(String args[]) {
        Mat imgGrayscale = new Mat();
        Mat imgBlurred = new Mat();
        Mat imgCanny = new Mat();
        Mat image = Imgcodecs.imread("c:\\cordova\\imagens\\teste.jpg", 1);
        int imageWidth = image.width();
        int imageHeight = image.height();
        Imgproc.cvtColor(image, imgGrayscale, Imgproc.COLOR_BGR2GRAY);
        Imgproc.equalizeHist(imgGrayscale, imgGrayscale);
        Imgproc.GaussianBlur(imgGrayscale, imgBlurred, new Size(5, 5), 1.8);
        Photo.fastNlMeansDenoising(imgBlurred, imgBlurred);
        Imshow.show(imgBlurred);
        Mat imgKernel = Imgproc.getStructuringElement(Imgproc.MORPH_CROSS, new Size(3, 3));
        Imgproc.Canny(imgBlurred, imgCanny, 0, 80);
        Imshow.show(imgCanny);
        Imgproc.dilate(imgCanny, imgCanny, imgKernel, new Point(-1, -1), 2);
        Imgproc.erode(imgCanny, imgCanny, imgKernel, new Point(-1, -1), 1);
        Imshow.show(imgCanny);
        Mat lines = new Mat();
        int threshold = 100;
        int minLineSize = imageWidth < imageHeight ? imageWidth / 3 : imageHeight / 3;
        int lineGap = 5;
        Imgproc.HoughLinesP(imgCanny, lines, 1, Math.PI / 360, threshold, minLineSize, lineGap);
        System.out.println(lines.rows());
        for (int x = 0; x < lines.rows(); x++) {
            double[] vec = lines.get(x, 0);
            double x1 = vec[0], y1 = vec[1], x2 = vec[2], y2 = vec[3];
            Point start = new Point(x1, y1);
            Point end = new Point(x2, y2);
            Imgproc.line(image, start, end, new Scalar(255, 0, 0), 1);
        }
        Imshow.show(image);
    }
}
Second try: using bilateral filter:
public class TesteNormal {
    static {
        System.loadLibrary("opencv_java310");
    }

    public static void main(String args[]) {
        Mat imgBlurred = new Mat();
        Mat imgCanny = new Mat();
        Mat image = Imgcodecs.imread("c:\\cordova\\imagens\\teste.jpg", 1);
        int imageWidth = image.width();
        int imageHeight = image.height();
        Imgproc.bilateralFilter(image, imgBlurred, 10, 35, 35);
        Imshow.show(imgBlurred);
        Mat imgKernel = Imgproc.getStructuringElement(Imgproc.MORPH_CROSS, new Size(3, 3));
        Imgproc.Canny(imgBlurred, imgCanny, 0, 120);
        Imshow.show(imgCanny);
        Imgproc.dilate(imgCanny, imgCanny, imgKernel, new Point(-1, -1), 2);
        Imgproc.erode(imgCanny, imgCanny, imgKernel, new Point(-1, -1), 1);
        Imshow.show(imgCanny);
        Mat lines = new Mat();
        int threshold = 100;
        int minLineSize = imageWidth < imageHeight ? imageWidth / 3 : imageHeight / 3;
        int lineGap = 5;
        Imgproc.HoughLinesP(imgCanny, lines, 1, Math.PI / 360, threshold, minLineSize, lineGap);
        System.out.println(lines.rows());
        for (int x = 0; x < lines.rows(); x++) {
            double[] vec = lines.get(x, 0);
            double x1 = vec[0], y1 = vec[1], x2 = vec[2], y2 = vec[3];
            Point start = new Point(x1, y1);
            Point end = new Point(x2, y2);
            Imgproc.line(image, start, end, new Scalar(255, 0, 0), 1);
        }
        Imshow.show(image);
    }
}
As suggested, I am trying to use OpenCV contrib, with StructuredEdgeDetection. I am testing with a fixed image.
First, I compiled OpenCV with contrib.
Second, I wrote the C++ code:
JNIEXPORT jobject JNICALL Java_vi_pdfscanner_main_ScannerEngine_getRandomFlorest(JNIEnv *env, jobject thiz, jobject bitmap) {
    // the native signature needs the jobject bitmap parameter to match the Java wrapper below
    Mat mbgra = imread("/storage/emulated/0/Resp/coco.jpg", 1);
    Mat3f fsrc;
    mbgra.convertTo(fsrc, CV_32F, 1.0 / 255.0); // when I run this convertTo, I get an all-black image, so I get no edges
    const String model = "/storage/emulated/0/Resp/model.yml.gz";
    Ptr<cv::ximgproc::StructuredEdgeDetection> pDollar = cv::ximgproc::createStructuredEdgeDetection(model);
    Mat edges;
    __android_log_print(ANDROID_LOG_VERBOSE, APPNAME, "chamando edges");
    pDollar->detectEdges(fsrc, edges);
    imwrite("/storage/emulated/0/Resp/edges.jpg", edges);
    jclass java_bitmap_class = (jclass) env->FindClass("android/graphics/Bitmap");
    jmethodID mid = env->GetMethodID(java_bitmap_class, "getConfig", "()Landroid/graphics/Bitmap$Config;");
    jobject bitmap_config = env->CallObjectMethod(bitmap, mid);
    jobject _bitmap = mat_to_bitmap(env, edges, false, bitmap_config);
    return _bitmap;
}
and I wrote this Java wrapper:
public class ScannerEngine {
    private static ScannerEngine ourInstance = new ScannerEngine();

    public static ScannerEngine getInstance() {
        return ourInstance;
    }

    private ScannerEngine() {
    }

    public native Bitmap getRandomFlorest(Bitmap bitmap);

    static {
        System.loadLibrary("opencv_java3");
        System.loadLibrary("Scanner");
    }
}
The point is, when I run these lines:
Mat mbgra = imread("/storage/emulated/0/Resp/coco.jpg", 1); // image is ok
Mat3f fsrc;
mbgra.convertTo(fsrc, CV_32F, 1.0 / 255.0); // now the image is all black; does anyone have an idea why?
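One thing to check (an assumption on my part, untested on a device): convertTo itself doesn't destroy the data, but fsrc and the edges output are CV_32F images with values in 0..1, while imwrite and the bitmap conversion expect 8-bit values in 0..255, so the saved or displayed result looks almost black. Rescaling before writing should reveal the edges:
// Sketch: rescale the float edge map to 8-bit before saving/displaying
Mat edges8u;
edges.convertTo(edges8u, CV_8U, 255.0); // map 0..1 floats to 0..255
imwrite("/storage/emulated/0/Resp/edges8u.jpg", edges8u);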
Thanks very much!
The results above are strange, like this:
Original Image:
http://prntscr.com/cyd8qi
Edges Image:
http://prntscr.com/cyd9ax
It runs on Android 4.4 (API level 19) on a really old device.
That's all,
thank you very much.
I have written the following code. The background image is displayed, but it does not cover the full background.
private Bitmap background;
int mWidth = Display.getWidth();
int mHeight = Display.getHeight();

public MyScreen()
{
    // Set the displayed title of the screen
    //backgroundBitmap = Bitmap.getBitmapResource("slidimage.png");
    final Bitmap background = Bitmap.getBitmapResource("slidimage.png");
    HorizontalFieldManager vfm = new HorizontalFieldManager(USE_ALL_HEIGHT | USE_ALL_WIDTH) {
        public void paint(Graphics g) {
            g.drawBitmap(0, 0, mWidth, mHeight, background, 0, 0);
            super.paint(g);
        }
    };
    add(vfm);
}
public static Bitmap resizeBitmap(Bitmap image, int width, int height)
{
    int rgb[] = new int[image.getWidth() * image.getHeight()];
    image.getARGB(rgb, 0, image.getWidth(), 0, 0, image.getWidth(), image.getHeight());
    int rgb2[] = rescaleArray(rgb, image.getWidth(), image.getHeight(), width, height);
    Bitmap temp2 = new Bitmap(width, height);
    temp2.setARGB(rgb2, 0, width, 0, 0, width, height);
    return temp2;
}
You can use the above method to resize the image: just pass in the image to be resized along with the target width and height, and the function will return the resized image (see the usage sketch after the next method). rescaleArray is the method below:
private static int[] rescaleArray(int[] ini, int x, int y, int x2, int y2)
{
    int out[] = new int[x2 * y2];
    for (int yy = 0; yy < y2; yy++)
    {
        int dy = yy * y / y2;
        for (int xx = 0; xx < x2; xx++)
        {
            int dx = xx * x / x2;
            out[(x2 * yy) + xx] = ini[(x * dy) + dx];
        }
    }
    return out;
}
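For example, a minimal usage sketch inside the paint() method above (untested; names taken from the question):
Bitmap scaled = resizeBitmap(background, mWidth, mHeight);
g.drawBitmap(0, 0, mWidth, mHeight, scaled, 0, 0);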
I am trying to apply a sepia effect to an image on BlackBerry.
I have tried, but I don't get a 100% sepia effect.
This is the code I tried for the sepia effect.
I used the getARGB() and setARGB() methods of the Bitmap class.
public Bitmap changetoSepiaEffect(Bitmap bitmap) {
    int sepiaIntensity = 30; // value lies between 0-255. 30 works well
    // Play around with this. 20 works well and was recommended
    // by another developer. 0 produces a black/white image
    int sepiaDepth = 20;
    int w = bitmap.getWidth();
    int h = bitmap.getHeight();
    // WritableRaster raster = img.getRaster();
    // We need 3 integers (for R,G,B color values) per pixel.
    int[] pixels = new int[w * h * 3];
    // raster.getPixels(0, 0, w, h, pixels);
    bitmap.getARGB(pixels, 0, w, x, y, w, h);
    // Process 3 ints at a time for each pixel.
    // Each pixel has 3 RGB colors in the array
    for (int i = 0; i < pixels.length; i += 3) {
        int r = pixels[i];
        int g = pixels[i+1];
        int b = pixels[i+2];
        int gry = (r + g + b) / 3;
        r = g = b = gry;
        r = r + (sepiaDepth * 2);
        g = g + sepiaDepth;
        if (r > 255) r = 255;
        if (g > 255) g = 255;
        if (b > 255) b = 255;
        // Darken blue color to increase sepia effect
        b -= sepiaIntensity;
        // normalize if out of bounds
        if (b < 0) {
            b = 0;
        }
        if (b > 255) {
            b = 255;
        }
        pixels[i] = r;
        pixels[i+1] = g;
        pixels[i+2] = b;
    }
    // raster.setPixels(0, 0, w, h, pixels);
    bitmap.setARGB(pixels, 0, w, 0, 0, w, h);
    return bitmap;
}
This call:
bitmap.getARGB(pixels, 0, w, x, y, w, h);
returns an int[] array where each int represents a color in the format 0xAARRGGBB. This differs from your previous code using JavaSE's Raster class.
EDIT: The method fixed for BlackBerry:
public static Bitmap changetoSepiaEffect(Bitmap bitmap) {
    int sepiaIntensity = 30; // value lies between 0-255. 30 works well
    // Play around with this. 20 works well and was recommended
    // by another developer. 0 produces a black/white image
    int sepiaDepth = 20;
    int w = bitmap.getWidth();
    int h = bitmap.getHeight();
    // Unlike JavaSE's Raster, we need one int per pixel
    int[] pixels = new int[w * h];
    // We get the whole image
    bitmap.getARGB(pixels, 0, w, 0, 0, w, h);
    // Process each pixel. A pixel comes in the format 0xAARRGGBB.
    for (int i = 0; i < pixels.length; i++) {
        int r = (pixels[i] >> 16) & 0xFF;
        int g = (pixels[i] >> 8) & 0xFF;
        int b = pixels[i] & 0xFF;
        int gry = (r + g + b) / 3;
        r = g = b = gry;
        r = r + (sepiaDepth * 2);
        g = g + sepiaDepth;
        if (r > 255)
            r = 255;
        if (g > 255)
            g = 255;
        if (b > 255)
            b = 255;
        // Darken blue color to increase sepia effect
        b -= sepiaIntensity;
        // normalize if out of bounds
        if (b < 0) {
            b = 0;
        }
        if (b > 255) {
            b = 255;
        }
        // Now we compose a new pixel with the modified channels,
        // and an alpha value of 0xFF (fully opaque)
        pixels[i] = ((r << 16) & 0xFF0000) | ((g << 8) & 0x00FF00) | (b & 0xFF) | 0xFF000000;
    }
    // We return a new Bitmap. Trying to modify the one passed as a parameter
    // could throw an exception, since in BlackBerry not all Bitmaps are modifiable.
    Bitmap ret = new Bitmap(w, h);
    ret.setARGB(pixels, 0, w, 0, 0, w, h);
    return ret;
}
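Usage is then a single call; for example (hypothetical resource name):
Bitmap sepia = changetoSepiaEffect(Bitmap.getBitmapResource("photo.png"));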