What is BlackBerry's equivalent to Java ME's Image.createImage() from an existing (loaded) image?

I have the following Java ME code that I'd like to port to BlackBerry:
Image imgAll = Image.createImage("/fontDigits_200x20.png");
imageDigits = new Image[10];
for (int i = 0; i < imageDigits.length; i++) {
    imageDigits[i] = Image.createImage(imgAll, i * 20, 0, 20, 20, Sprite.TRANS_NONE);
}
Basically, it's one image of ten digits that I want to split into 10 individual images and store them into an array. I looked through the docs, but can't find anything similar on EncodedImage or Graphics.
Thank you for any pointers!
UPDATE:
Good news! Apparently there's no way to crop an EncodedImage so that you end up with a new EncodedImage that is a cropped subset of the original. However, you can do exactly that with a Bitmap, which is essentially the same thing.

You can use
Bitmap.getARGB(int[] argbData, int offset, int scanLength, int x, int y, int width, int height)
after loading your image:
Bitmap imgAll = Bitmap.getBitmapResource("fontDigits_200x20.png");
and of course you can create a new Bitmap from this ARGB data.
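For the digit strip in the question, a rough sketch of that approach might look like this (variable names are mine; 20x20 is the digit size from the original code, and this is an outline rather than tested code):
Bitmap imgAll = Bitmap.getBitmapResource("fontDigits_200x20.png");
Bitmap[] digits = new Bitmap[10];
int[] argb = new int[20 * 20];
for (int i = 0; i < digits.length; i++) {
    // read the i-th 20x20 region out of the strip (scanLength = 20, the buffer width)
    imgAll.getARGB(argb, 0, 20, i * 20, 0, 20, 20);
    // write those pixels into a fresh 20x20 Bitmap
    digits[i] = new Bitmap(20, 20);
    digits[i].setARGB(argb, 0, 20, 0, 0, 20, 20);
}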

You can do it directly with the Bitmap.scaleInto function:
Bitmap src = Bitmap.getBitmapResource("fontDigits_200x20.png");
Bitmap dst = new Bitmap(64, 32);
int filterType = Bitmap.FILTER_BILINEAR;
// copies (and scales) the srcWidth x srcHeight region of src at (srcLeft, srcTop)
// into the dstWidth x dstHeight region of dst at (dstLeft, dstTop)
src.scaleInto(srcLeft, srcTop, srcWidth, srcHeight, dst, dstLeft, dstTop, dstWidth, dstHeight, filterType);
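Applied to the ten-digit strip from the question, a minimal sketch could look like this (assuming your OS level provides scaleInto; source and destination regions are the same size, so no actual scaling happens):
Bitmap strip = Bitmap.getBitmapResource("fontDigits_200x20.png");
Bitmap[] digits = new Bitmap[10];
for (int i = 0; i < digits.length; i++) {
    digits[i] = new Bitmap(20, 20);
    // copy the i-th 20x20 digit out of the strip into its own Bitmap
    strip.scaleInto(i * 20, 0, 20, 20, digits[i], 0, 0, 20, 20, Bitmap.FILTER_BILINEAR);
}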

Related

Get RGB value from each pixel of camera view

For my android app I have code that looks like this:
Bitmap currentBitmap = textureView.getBitmap();
int pixelCount = textureView.getWidth() * textureView.getHeight();
int redSum = 0, greenSum = 0, blueSum = 0;
int[] pixels = new int[pixelCount];
// get pixels as RGB-Integer to pixels[] array
currentBitmap.getPixels(pixels, 0, textureView.getWidth(), 0, 0, textureView.getWidth(), textureView.getHeight());
// extract the red, green and blue components from every pixel and add them to the running sums
for (int pixelIndex = 0; pixelIndex < pixelCount; pixelIndex++) {
    redSum += Color.red(pixels[pixelIndex]);
    greenSum += Color.green(pixels[pixelIndex]);
    blueSum += Color.blue(pixels[pixelIndex]);
}
It takes every pixel from a live camera image and gets the RGB value from it. Is there a similar solution for Swift on iOS?
I am having trouble with the different image formats in Swift and how to get image data from them. My camera image comes in as a CIImage.

Problem Understanding OpenCV Convert Mat to BufferedImage

I am a new user of Java OpenCV, and today I am working through the official tutorial on how to convert a Mat object to a BufferedImage.
From the demo code, I can understand that the input image source is in matrix form, and sourcePixels seems to be a byte-array representation of the image, so we need to copy the values from the original matrix into sourcePixels. Here sourcePixels has the length of the whole image in bytes (size: w * h * channels), so it holds all of the image's byte values at once.
Then comes the part that is not intuitive to me. System.arraycopy() seems to copy the values from sourcePixels to targetPixels, but what is actually returned is image. I can guess from the code that targetPixels is related to image, but I don't see how copying values from sourcePixels to targetPixels ends up affecting the values of image.
Here's the demo code. Thanks!
private static BufferedImage matToBufferedImage(Mat original)
{
    BufferedImage image = null;
    int width = original.width(), height = original.height(), channels = original.channels();
    byte[] sourcePixels = new byte[width * height * channels];
    original.get(0, 0, sourcePixels);
    if (original.channels() > 1)
    {
        image = new BufferedImage(width, height, BufferedImage.TYPE_3BYTE_BGR);
    }
    else
    {
        image = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
    }
    final byte[] targetPixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
    System.arraycopy(sourcePixels, 0, targetPixels, 0, sourcePixels.length);
    return image;
}
Each BufferedImage is backed by a byte array, just like the Mat class from OpenCV. The call to ((DataBufferByte) image.getRaster().getDataBuffer()).getData() returns this underlying byte array and assigns it to targetPixels. In other words, targetPixels points to the byte array that the BufferedImage image is currently wrapping. So when you call System.arraycopy, you are actually copying from the source byte array into the byte array of the BufferedImage. That's why image is returned: at that point, the underlying byte array that image encapsulates already contains the pixel data from original. It's like this small example, where after making b point to a, modifications through b are also reflected in a. targetPixels behaves the same way: because it points to the byte array that image encapsulates, copying from sourcePixels into targetPixels also changes the image.
int[] a = new int[1];
int[] b = a;
// Because b references the same array that a does
// Modifying b will actually change the array a is pointing to
b[0] = 1;
System.out.println(a[0] == 1);
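For completeness, a minimal hypothetical usage of the helper, assuming OpenCV 3+ (where imread lives in Imgcodecs) and that the native library has already been loaded:
// load an image into a Mat and convert it for use with Swing/AWT
Mat original = Imgcodecs.imread("input.png");
BufferedImage image = matToBufferedImage(original);
// 'image' can now be drawn on a JPanel, saved with ImageIO.write(...), etc.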

Convert grayscale HImage (MVTec Halcon library) to C# bitmap

Is there an easy way to convert a grayscale Halcon/MVTec HImage object to a C# Bitmap? Sample code exists here (MVTec documentation) for a color image:
HTuple type, width, height;
HImage patras = new HImage("patras");
HImage interleaved = patras.InterleaveChannels("argb", "match", 255);
IntPtr ptr = interleaved.GetImagePointer1(out type, out width, out height);
Image img = new Bitmap(width/4, height, width,
PixelFormat.Format32bppPArgb, ptr);
pictureBox.Image = img;
But from this sample, it is not clear how I can work with grayscale images.
I have researched your problem, and this link, https://multipix.com/supportblog/halcon-bitmap-himage-conversion/, explains how to create a bitmap object for both RGB and single-channel images, which is what you are looking for.
It states:
The creation of a bitmap from a HALCON image can be done through the constructors of the bitmap class. With single channel images this is straight forward by using the pointer from the operator get_image_pointer1 and the dimensions of the image.
I believe this means that it is the exact same format as the sample code you have given, except that you remove the line HImage interleaved = patras.InterleaveChannels("argb", "match", 255);
Your code will probably look like this if patras is a grayscale image:
HTuple type, width, height;
HImage patras = new HImage("patras");
IntPtr ptr = patras.GetImagePointer1(out type, out width, out height);
Image img = new Bitmap(width/4, height, width, PixelFormat.Format16bppGrayScale, ptr);
pictureBox.Image = img;
Since you cannot directly create an 8-bit grayscale bitmap, the quickest way is to convert the gray image into RGB:
HTuple htType, htWidth, htHeight;
HImage hiImageNew = hiImage.Compose3(hiImage, hiImage);
hiImageNew = hiImageNew.InterleaveChannels("argb", "match", 255);
IntPtr ptr = hiImageNew.GetImagePointer1(out htType, out htWidth, out htHeight);
System.Drawing.Image bImage = new Bitmap(htWidth/4, htHeight, htWidth, System.Drawing.Imaging.PixelFormat.Format32bppPArgb, ptr);
Check this:
https://github.com/Joncash/HanboAOMClassLibrary/blob/master/Hanbo.Helper/ImageConventer.cs
In this class you can find a function where you can choose whether you have a grayscale or RGB image:
public static Bitmap ConvertHalconImageToBitmap(HObject halconImage, bool isColor)

How to resize images after uploading with Uploadify?

I have implemented Uploadify in my ASP.NET MVC 3 application to upload images, but I now want to resize the images that I upload. I am not sure what to do next in order to start resizing. I think there might be various ways to perform this resize, but I have not been able to find any examples of it yet. Can anyone suggest some way of doing this? Thanks!
Here's a function you can use on the server side. I use it to process my images after Uploadify is done.
private static Image ResizeImage(Image imgToResize, Size size)
{
    int sourceWidth = imgToResize.Width;
    int sourceHeight = imgToResize.Height;
    float nPercent = 0;
    float nPercentW = 0;
    float nPercentH = 0;
    nPercentW = ((float)size.Width / (float)sourceWidth);
    nPercentH = ((float)size.Height / (float)sourceHeight);
    if (nPercentH < nPercentW)
        nPercent = nPercentH;
    else
        nPercent = nPercentW;
    int destWidth = (int)(sourceWidth * nPercent);
    int destHeight = (int)(sourceHeight * nPercent);
    Bitmap b = new Bitmap(destWidth, destHeight);
    Graphics g = Graphics.FromImage((Image)b);
    g.InterpolationMode = InterpolationMode.HighQualityBicubic;
    g.DrawImage(imgToResize, 0, 0, destWidth, destHeight);
    g.Dispose();
    return (Image)b;
}
Here's how I use it:
int length = (int)stream.Length;
byte[] tempImage = new byte[length];
stream.Read(tempImage, 0, length);
// reading advanced the stream position, so rewind before decoding it into a Bitmap
stream.Position = 0;
var image = new Bitmap(stream);
var resizedImage = ResizeImage(image, new Size(300, 300));
Holler if you need help getting it running.
You have three options:
1. Use the GDI+ library (example code: C# GDI+ Image Resize Function)
2. Use third-party components (I use ImageMagick; my solution: Generating image thumbnails in ASP.NET?)
3. Resize images on the client side (some uploaders can do this)

How do I create an EncodedImage from a javax.microedition.lcdui.Image

I am developing a J2ME application for the BlackBerry. I download a large GIF and now want to scale the image to fit the screen. I am looking for better performance than scaling the image using approaches like this.
I haven't used the microedition Image myself, but I've worked with RIM's Image class recently, and it seems the least-common-denominator representation is an array of RGB values. I see that lcdui.Image has a method
getRGB(int[] rgbData, int offset, int scanlength, int x, int y, int width, int height)
which should give the array you need. You can then get a RIM Bitmap or Image or PNGEncodedImage with
Bitmap.setARGB(int[] data, int offset, int scanLength, int left, int top, int width, int height)
ImageFactory.createImage(Bitmap bitmap)
PNGEncodedImage.encode(Bitmap bitmap)
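Putting those pieces together, a rough sketch of the conversion might look like this (variable names are mine, and it is untested outline code rather than a drop-in solution):
// 'lcduiImage' is the javax.microedition.lcdui.Image you downloaded
int w = lcduiImage.getWidth();
int h = lcduiImage.getHeight();
int[] rgb = new int[w * h];
lcduiImage.getRGB(rgb, 0, w, 0, 0, w, h);          // pull the pixel data out of the LCDUI image
Bitmap bitmap = new Bitmap(w, h);
bitmap.setARGB(rgb, 0, w, 0, 0, w, h);             // push it into a RIM Bitmap
EncodedImage png = PNGEncodedImage.encode(bitmap); // re-encode as PNG if an EncodedImage is required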
