I am trying to display an image in a UIImageView via NSData.FromArray(), but it gives a null reference exception. My code is the following:
Byte[] _imgData = GetRawData(_imgPath); // this method gets the raw byte array (131072 bytes)
NSData _data = NSData.FromArray(_imgData);
ImgView.Image = UIImage.LoadFromData(_data); // _data holds the byte array, but ImgView.Image ends up null
The byte array holds RLE-compressed data. In
ImgView.Image = UIImage.LoadFromData(_data);
_data is RLE-compressed byte data, and I don't know how to convert it into an image format iOS supports.
Any suggestions to solve this issue?
UIImage.Load* methods will return null if the data (be it an NSData or a filename) is invalid and/or the format is unknown to UIImage.
You'll likely need to use a lower-level API and supply some metadata (e.g. width, height, depth) about the raw data yourself. Once that's done, you can construct a UIImage on top of it.
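As a minimal sketch of that lower-level route, assuming you have already decoded the RLE stream into raw 32-bit RGBA pixels of known width and height (the helper name and pixel layout here are hypothetical), Core Graphics, which is a plain C API, can wrap the buffer in a CGImage:

#include <stdint.h>
#include <stdbool.h>
#include <CoreGraphics/CoreGraphics.h>

/* Hypothetical helper: wraps already-decoded 32-bit RGBA pixels in a CGImage.
   The caller must keep `pixels` alive for the lifetime of the image. */
CGImageRef CreateImageFromRawRGBA(const uint8_t *pixels, size_t width, size_t height)
{
    size_t bytesPerRow = width * 4;  /* 4 bytes per RGBA pixel */
    CGDataProviderRef provider =
        CGDataProviderCreateWithData(NULL, pixels, bytesPerRow * height, NULL);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef image = CGImageCreate(width, height,
                                     8,                 /* bits per component */
                                     32,                /* bits per pixel */
                                     bytesPerRow,
                                     colorSpace,
                                     kCGImageAlphaLast, /* non-premultiplied RGBA */
                                     provider,
                                     NULL,              /* no decode array */
                                     false,             /* no interpolation */
                                     kCGRenderingIntentDefault);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);
    return image;  /* caller releases with CGImageRelease */
}

From the resulting CGImageRef you can get the UIImage with UIImage.FromImage in Xamarin.iOS (or imageWithCGImage: in Objective-C). Decoding the RLE stream itself is still up to you; UIImage has no generic RLE decoder.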
I am a new user of Java OpenCV, and I am just going through the official tutorial today to learn how to convert a Mat object to a BufferedImage.
From the demo code, I can understand that the input image source is in matrix form, and sourcePixels seems to be a byte-array representation of the image, so we need to copy the values from the original matrix into sourcePixels. Here sourcePixels has the length of the whole image in bytes (size: w * h * channels), so it takes all of the image's byte values at once.
Then comes the part that is not intuitive to me. System.arraycopy() seems to copy the values from sourcePixels to targetPixels, but what is actually returned is image. I can guess from the code that targetPixels is related to image, but I don't see how copying values from sourcePixels to targetPixels actually affects the values of image.
Here's the demo code. Thanks!
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import org.opencv.core.Mat;

private static BufferedImage matToBufferedImage(Mat original)
{
    BufferedImage image = null;
    int width = original.width(), height = original.height(), channels = original.channels();
    byte[] sourcePixels = new byte[width * height * channels];
    // copy every pixel of the Mat, starting at row 0 / column 0, into sourcePixels
    original.get(0, 0, sourcePixels);
    if (original.channels() > 1)
    {
        image = new BufferedImage(width, height, BufferedImage.TYPE_3BYTE_BGR);
    }
    else
    {
        image = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
    }
    // getData() exposes the byte array backing the BufferedImage, so writing
    // into targetPixels writes into the image itself
    final byte[] targetPixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
    System.arraycopy(sourcePixels, 0, targetPixels, 0, sourcePixels.length);
    return image;
}
Each BufferedImage is backed by a byte array, just like the Mat class from OpenCV. The call to ((DataBufferByte) image.getRaster().getDataBuffer()).getData() returns this underlying byte array and assigns it to targetPixels. In other words, targetPixels points to the underlying byte array that the BufferedImage image is currently wrapping, so when you call System.arraycopy you are actually copying from the source byte array into the byte array of the BufferedImage. That's why image can be returned: at that point, the underlying byte array that image encapsulates contains the pixel data from original. It's like this small example, where after making b point to a, modifications through b are also reflected in a. Just like targetPixels, because it points to the byte array image encapsulates, copying from sourcePixels into targetPixels also changes the image:
int[] a = new int[1];
int[] b = a;
// Because b references the same array that a does
// Modifying b will actually change the array a is pointing to
b[0] = 1;
System.out.println(a[0] == 1);
In my project, I create a byte array:
Byte bytes = {0x7E, 0x7F};
But this produces a warning:
Excess elements in scalar initializer
What does this mean? Does it affect me?
You declared a scalar, not an array: without the brackets there is no array storage set aside, so the compiler warns that the excess initializer elements will be discarded, which may cause errors down the line.
Declare it as an array, with an explicit size:
// one-dimensional
Byte bytes[2] = {0x7E, 0x7F};
// two-dimensional (two rows of two bytes)
Byte bytes[2][2] = {{0x7E, 0x7F}, {0x7E, 0x7F}};
How stupid of me! Creating the byte array should look like this:
Byte bytes[] = {0x7E, 0x7F};
I simply did not write the [].
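For completeness, a quick C check (assuming Byte is the usual typedef for unsigned char, as in Apple's MacTypes.h) that the fixed declaration reserves the expected storage:

#include <stdio.h>

typedef unsigned char Byte;  /* assumption: Byte is a plain unsigned char */

int main(void)
{
    Byte bytes[] = {0x7E, 0x7F};
    /* sizeof reports the whole array: 2 elements * 1 byte each */
    printf("%zu\n", sizeof(bytes));  /* prints 2 */
    return 0;
}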
How do I remove the header from a .bmp file using Objective-C?
I am getting 54 extra bytes, and in order to view the image I have to remove these 54 bytes to get the actual image bytes.
NSData *data = [NSData dataWithContentsOfFile:snapshotFile options:0 error:&error];
NSBitmapImageRep *imagerep = [NSBitmapImageRep imageRepWithData:data];
NSData *bytes = [imagerep representationUsingType:NSBMPFileType properties:nil];
My image has width = 1280 and height = 800, so at 4 bytes per pixel the total should be 800 * 1280 * 4 = 4096000 bytes.
But when I checked the byte count calculated above, it was 4096054 bytes.
It seems the 54 extra bytes are the header.
I want to remove these header bytes in order to get the actual image.
Any help will be really appreciated, and please excuse me in case this has already been answered.
BMP is the native bitmap format of Windows and is used to store virtually any type of bitmap data. For Cocoa/Cocoa Touch, we typically use PNG. As for reducing the size of an image programmatically in Objective-C, please:
Refer to this!
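To answer the literal question, the header can also just be skipped. A minimal C sketch, assuming an uncompressed BMP, whose 4-byte little-endian value at header offset 10 (bfOffBits) records where the pixel data starts (54 in the common BITMAPINFOHEADER case):

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical helper: reads a .bmp and returns just the pixel bytes.
   bfOffBits (bytes 10..13 of the header, little-endian) gives the offset
   where the pixel data starts. The caller frees the returned buffer. */
uint8_t *ReadBMPPixels(const char *path, long *outSize)
{
    FILE *f = fopen(path, "rb");
    if (!f) return NULL;

    uint8_t header[14];
    if (fread(header, 1, 14, f) != 14) { fclose(f); return NULL; }
    uint32_t offBits = header[10] | header[11] << 8 |
                       header[12] << 16 | (uint32_t)header[13] << 24;

    fseek(f, 0, SEEK_END);
    long fileSize = ftell(f);
    long pixelSize = fileSize - offBits;

    uint8_t *pixels = malloc(pixelSize);
    fseek(f, offBits, SEEK_SET);
    fread(pixels, 1, pixelSize, f);
    fclose(f);

    *outSize = pixelSize;
    return pixels;
}

In Cocoa the same trim is one call: [data subdataWithRange:NSMakeRange(54, data.length - 54)].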
Using the following code, I am attempting to convert three float values into a single NSData object, which I can then transmit over a serial port.
float kP = [[self.kPTextField stringValue] floatValue];
float kI = [[self.kITextField stringValue] floatValue];
float kD = [[self.kDTextField stringValue] floatValue];
float combined[] = {kP, kI, kD};
NSData *dataPackage = [NSData dataWithBytes:&combined length:sizeof(combined)];
[self.serialPort sendData:dataPackage];
The problem is that it doesn't seem to work very well. Whenever I use the sizeof() C operator, it tells me that dataPackage is only 8 bytes, even though three float values should total 12 bytes. I am receiving the data with an Arduino. It sees the bytes coming in, but they aren't legible at all. I don't think it's a problem on the Arduino side of things (but who knows?).
Any help would be appreciated! I'm not a CS major, just a bio major, and I've never learned this stuff in a formal way, so I am sorry if my question is ridiculous. I've spent several hours searching the net about this problem and haven't found anything that helped.
EDIT: It turns out this code was completely correct. I made a simple mistake on the Arduino side of things by using a struct instead of a union to take the bytes and convert them back into floats.
For others who may be in a similar predicament, a successful way to convert bytes coming out of the serial port back into floats is the following:
(at top of implementation file)
union {
float pidVals[3];
byte bytes[12];
} pidUnion;
(inside loop)
if (Serial.available() > 11) {
for (int i = 0; i < 12; i++) {
pidUnion.bytes[i] = Serial.read();
}
}
//Now, you can get access to all three floats of data using pidUnion.pidVals[0], pidUnion.pidVals[1], etc.
This probably isn't the best or most reliable way to transmit data; there is no error-checking mechanism or packet structure, but it does work in a pinch. You would probably want to create a packet of data along with a checksum byte to make sure all of the data is intact on the other side (a rough sketch of that idea is below); this code doesn't have any of that.
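For reference, here is a hypothetical sketch of that framing in plain C: a start byte, the 12 raw float bytes, and a one-byte XOR checksum the receiver can verify before trusting the payload. The constants and the function name are made up.

#include <stdint.h>
#include <string.h>

#define PACKET_START 0x7E
#define PAYLOAD_LEN  12   /* three 4-byte floats */

/* Builds a 14-byte packet: [start][12 payload bytes][XOR checksum].
   The receiver recomputes the XOR over the payload and drops the
   packet if it does not match the final byte. */
void build_packet(const float pid[3], uint8_t out[PAYLOAD_LEN + 2])
{
    out[0] = PACKET_START;
    memcpy(&out[1], pid, PAYLOAD_LEN);  /* raw float bytes */
    uint8_t checksum = 0;
    for (int i = 1; i <= PAYLOAD_LEN; i++)
        checksum ^= out[i];
    out[PAYLOAD_LEN + 1] = checksum;
}

This still assumes both ends share the same float representation and byte order, which in practice holds for a little-endian Mac talking to an AVR Arduino.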
There are multiple problems with your code.
First, stringValue is an AppKit (NSTextField) accessor; on an iOS UITextField you want the text property, which is a string. In that case the first line should read like this:
float kP = [self.kPTextField.text floatValue];
Second, in C an array expression decays to a pointer to its first element when passed to a function. For a function parameter, the declarations
float combined[]
and
float *combined
are identical: both are "pointer to float".
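A tiny standalone C demonstration of this (hypothetical, just to show the address and size behavior):

#include <stdio.h>

int main(void)
{
    float combined[] = {1.0f, 2.0f, 3.0f};
    /* The array expression decays to a pointer to its first element,
       so both prints show the same address. */
    printf("%p %p\n", (void *)combined, (void *)&combined[0]);
    /* sizeof applied to the array itself still reports the full
       storage: 3 floats * 4 bytes = 12. */
    printf("%zu\n", sizeof(combined));
    return 0;
}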
So this code:
NSData *dataPackage = [NSData dataWithBytes:&combined
length: sizeof(combined)];
doesn't need an ampersand in front of combined (&combined yields the same address, just typed as a pointer to the whole array). It should read:
NSData *dataPackage = [NSData dataWithBytes:combined
                                     length:sizeof(combined)];
Third, what matters is sizeof(combined), not sizeof(dataPackage).
The expression sizeof(dataPackage) tells you the size of the variable dataPackage itself, which is a pointer to an NSData object. You must be running on a 64-bit device, where pointers are 8 bytes.
To test the length of the data in your NSData object, you want to ask it with the length property:
NSLog(#"sizeof(combined) = %d", sizeof(combined)";
NSData *dataPackage = [NSData dataWithBytes:&combined
length: sizeof(combined)];
NSLog(#"dataPackage.length = %d", dataPackage.length";
Both log statements should display values of 12.
I simply want to construct an Emgu.CV Image<,> from a pointer, and I am using the following code:
Size img = CvInvoke.cvGetSize(frame);
Image<Bgr, Byte> tImg = new Image<Bgr, byte>(img.Width, img.Height, 0, frame);
I don't know what value to give as the 3rd parameter of the Image<,> constructor that takes a pointer. It says "Size of aligned image row in bytes"; what does that mean?
Note that the image width has to be a multiple of 4, since some OpenCV code optimization is based on this assumption when a CvImage is constructed from a 1-D memory array.
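"Size of aligned image row in bytes" is the stride: the distance in bytes from the start of one pixel row to the start of the next, including any padding used to keep rows on a 4-byte boundary. A minimal C sketch of the usual computation, assuming 8-bit Bgr data at 3 bytes per pixel (the helper name is made up):

#include <stddef.h>

/* Row stride: the pixel bytes of one row, rounded up to a 4-byte boundary.
   A 10-pixel-wide Bgr row is 30 bytes of pixels plus 2 bytes of padding,
   so the stride is 32. */
size_t AlignedStride(size_t width, size_t bytesPerPixel)
{
    size_t rowBytes = width * bytesPerPixel;
    return (rowBytes + 3) & ~(size_t)3;  /* round up to a multiple of 4 */
}

If the rows behind your frame pointer are tightly packed, the stride is simply img.Width * 3 for Bgr data; whatever produced the buffer, the constructor needs its real row pitch.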