Converting an aligned uint8[8] array to a double - iOS

I am facing a bit of a challenge trying to convert an aligned uint8[8] array to a double.
It was particularly easy to convert uint8[4] to a long with bit operations, but I understand that a double can get messy in terms of the sign bit?
In Java I simply use ByteBuffer.wrap(bytes).getDouble(), but I assume it's not that easy in C.
I tried to implement the code below, but the last line gives the errors "Expression is not assignable" and "Shift count >= width of type".
long tempHigh = 0;
long tempLow = 0;
double sum = 0;
tempHigh |= buffer[0] & 0xFF;
tempHigh <<= 8;
tempHigh |= buffer[1] & 0xFF;
tempHigh <<= 8;
tempHigh |= buffer[2] & 0xFF;
tempHigh <<= 8;
tempHigh |= buffer[3] & 0xFF;
tempLow |= buffer[4] & 0xFF;
tempLow <<= 8;
tempLow |= buffer[5] & 0xFF;
tempLow <<= 8;
tempLow |= buffer[6] & 0xFF;
tempLow <<= 8;
tempLow |= buffer[7] & 0xFF;
sum |= ((tempHigh & 0xFFFF) <<= 32) + (tempLow & 0xFFFF);
How can this be done correctly, or how can I resolve the errors I have made?
Thanks in advance.

double is a floating-point type; it doesn't support bitwise operations such as |.
You could do something like:
double sum;
memcpy(&sum, buffer, sizeof(sum));
But be aware of endianness issues.
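For example, here is a minimal sketch of that approach with an explicit byte reversal, assuming the 8 bytes arrive big-endian (network order, as Java's ByteBuffer writes by default) while the host is little-endian, as on current iOS devices; bytes_to_double is just an illustrative name:
#include <stdint.h>
#include <stdio.h>
#include <string.h>

// Reverse big-endian input into host (little-endian) order, then memcpy into a double.
static double bytes_to_double(const uint8_t buffer[8])
{
    uint8_t reversed[8];
    for (int i = 0; i < 8; ++i)
        reversed[i] = buffer[7 - i];

    double sum;
    memcpy(&sum, reversed, sizeof sum);
    return sum;
}

int main(void)
{
    const uint8_t buffer[8] = {0x3F, 0xF8, 0, 0, 0, 0, 0, 0};  // 1.5 in big-endian IEEE 754
    printf("%f\n", bytes_to_double(buffer));                   // expected: 1.500000
    return 0;
}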

The portable way to do it is to read out the sign, exponent, and mantissa values into integer variables with bitwise arithmetic, then call ldexp to apply the exponent.
OK, here's some code. Beware it might have mismatched parentheses or off-by-one errors.
unsigned char x[8]; // your input; code assumes little-endian byte order and a normal (finite, non-zero) value
long long mantissa = ((((((x[6]%16)*256LL + x[5])*256 + x[4])*256 + x[3])*256 + x[2])*256 + x[1])*256 + x[0];
int exp = x[7]%128*16 + x[6]/16 - 1023;
int sign = 1 - x[7]/128*2;
double y = sign*ldexp(0x1p52 + mantissa, exp - 52); // ldexp is declared in <math.h>

How about a union? Write into the two integer halves as you have, then the double is automagically correct. Something like this:
union
{
    double sum;
    struct
    {
        uint32_t tempLow;   // on little-endian iOS the low half of the double sits first in memory
        uint32_t tempHigh;
    } v;
} u;
u.v.tempHigh = 0;
u.v.tempLow = 0;
u.v.tempHigh |= buffer[0] & 0xFF;
u.v.tempHigh <<= 8;
u.v.tempHigh |= buffer[1] & 0xFF;
u.v.tempHigh <<= 8;
u.v.tempHigh |= buffer[2] & 0xFF;
u.v.tempHigh <<= 8;
u.v.tempHigh |= buffer[3] & 0xFF;
u.v.tempLow |= buffer[4] & 0xFF;
u.v.tempLow <<= 8;
u.v.tempLow |= buffer[5] & 0xFF;
u.v.tempLow <<= 8;
u.v.tempLow |= buffer[6] & 0xFF;
u.v.tempLow <<= 8;
u.v.tempLow |= buffer[7] & 0xFF;
printf("%f", u.sum);

Related

How to implement fast majority voting for a bit matrix

I have a representation of a large bit matrix where I'd like to efficiently retrieve the majority bit for each matrix column (i.e. the bit value that occurs most often). The background is that the matrix rows represent ORB feature descriptors and the value I'm looking for resembles the mean in the Hamming domain.
The implementation I'm currently working with looks like this
// holds column-sum for each bit
std::vector<int> sum(32 * 8, 0);
// cv::Mat mat is a matrix of values ∈ [0, 255] filled elsewhere
for (int i = 0; i < mat.rows; ++i)
{
    const cv::Mat &d = mat.row(i);
    const unsigned char *p = d.ptr<unsigned char>();
    // count bits set column-wise
    for (int j = 0; j < d.cols; ++j, ++p)
    {
        if (*p & (1 << 7)) ++sum[j * 8];
        if (*p & (1 << 6)) ++sum[j * 8 + 1];
        if (*p & (1 << 5)) ++sum[j * 8 + 2];
        if (*p & (1 << 4)) ++sum[j * 8 + 3];
        if (*p & (1 << 3)) ++sum[j * 8 + 4];
        if (*p & (1 << 2)) ++sum[j * 8 + 5];
        if (*p & (1 << 1)) ++sum[j * 8 + 6];
        if (*p & (1)) ++sum[j * 8 + 7];
    }
}
cv::Mat mean = cv::Mat::zeros(1, 32, CV_8U);
unsigned char *p = mean.ptr<unsigned char>();
const int N2 = (int)mat.rows / 2 + mat.rows % 2;
for (size_t i = 0; i < sum.size(); ++i)
{
    if (sum[i] >= N2)
    {
        // set bit in mean only if the corresponding matrix column
        // contains more 1s than 0s
        *p |= 1 << (7 - (i % 8));
    }
    if (i % 8 == 7) ++p;
}
The bottleneck is the big loop with all the bit shifting. Is there any way or known bit magic to make this any faster?
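One way to remove the eight branches is to add the bit values directly. This is a plain-C sketch independent of OpenCV (accumulate_row is just an illustrative name), and whether it is actually faster depends on how well the compiler vectorizes it:
#include <stdint.h>
#include <stdio.h>

// Accumulate one row of descriptor bytes into per-bit column counters,
// keeping the same counter layout as above: bit 7 of byte j -> sum[j*8 + 0].
static void accumulate_row(const uint8_t *row, int nbytes, int *sum)
{
    for (int j = 0; j < nbytes; ++j)
    {
        const uint8_t b = row[j];
        for (int k = 0; k < 8; ++k)
            sum[j * 8 + k] += (b >> (7 - k)) & 1;   // add the bit value itself, no branch
    }
}

int main(void)
{
    uint8_t row[32] = {0xF0, 0x01};       // toy descriptor row; remaining bytes are zero
    int sum[32 * 8] = {0};
    accumulate_row(row, 32, sum);
    printf("%d %d\n", sum[0], sum[15]);   // expected: 1 1 (bit 7 of byte 0, bit 0 of byte 1)
    return 0;
}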

Zero-fill right shift in Swift

(byte) ((val & 0xff00) >>> 8);
This is the Java code. I want to convert this code to Swift, but there is no >>> operator in Swift. How can I use a zero-fill right shift in Swift?
If you use the truncatingBitPattern initializer of the integer types to extract a byte, then you don't have to mask the value, and it does not matter whether the shift operator fills with zeros or ones (which depends on whether the source type is unsigned or signed).
Choose Int8 or UInt8 depending on whether the byte should be interpreted as a signed or unsigned number.
let value = 0xABCD
let signedByte = Int8(truncatingBitPattern: value >> 8)
print(signedByte) // -85
let unsignedByte = UInt8(truncatingBitPattern: value >> 8)
print(unsignedByte) // 171
Operator >> in Swift is zero-fill (for unsigned integers):
The bit-shifting behavior for unsigned integers is as follows:
Existing bits are moved to the left or right by the requested number of places.
Any bits that are moved beyond the bounds of the integer’s storage are discarded.
Zeros are inserted in the spaces left behind after the original bits are moved to the left or right.
https://developer.apple.com/library/content/documentation/Swift/Conceptual/Swift_Programming_Language/AdvancedOperators.html#//apple_ref/doc/uid/TP40014097-CH27-ID29
You don't need a zero-fill shift in this case because a byte is only 8 bits.
The code you have is the same as
(byte) (((val & 0xFF00) >> 8) & 0xFF)
or
(byte) ((val & 0xFF00) >> 8)
or
(byte) (val >> 8)
A zero-fill right shift operator doesn't exist in Swift/Objective-C, unfortunately. As an alternative / workaround:
// java
// let's say we want to zero-fill right shift 4 bits
int num = -333;
num >>>= 4; // num: 268435435
// objc (assumes a 32-bit NSInteger)
NSInteger num = -333;
num >>= 1;
if (num < 0) num ^= NSIntegerMin;
num >>= 3; // num: 268435435
// swift (assume we are dealing with 32 bit integer)
var num: Int32 = -333
num >>= 1
if num < 0 {
    num ^= Int32.min
}
num >>= 3 // num: 268435435
Essentially, get rid of the sign bit when the value is negative.
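For completeness: since Objective-C is a superset of C, the same zero-fill behaviour also falls out of shifting an unsigned value; a minimal C sketch:
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int32_t num = -333;
    uint32_t zeroFilled = (uint32_t)num >> 4;   // cast to unsigned, then shift
    printf("%u\n", zeroFilled);                 // expected: 268435435
    return 0;
}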

Is there a direct way to get a unique value representing an RGB color in OpenCV C++

My image is an RGB image. I want to get a unique value (such as a unique code) to represent the RGB color value of a certain pixel. For example, if the pixel's red channel = 23, green channel = 200 and blue channel = 45, this RGB color could be represented by 232765. I wish there were a direct OpenCV C++ function to get such a value from a pixel. And note that this value should be unique for that RGB value.
I want something like this, and I know this is not correct.
uniqueColorForPixel_i_j=(matImage.at<Vec3b>(i,j)).getUniqueColor();
I hope something could be done if we can get the Scalar value of a pixel. And just as RNG can generate a random Scalar RGB value from a number, can we get the inverse...
Just some small sample code to show how to pass a Vec3b directly to the function, plus an alternative shift-and-mask approach.
The code is based on this answer.
UPDATE
I added also a simple struct BGR, that will handle more easily the conversion between Vec3b and unsigned.
UPDATE 2
The code in your question:
uniqueColorForPixel_i_j=(matImage.at<Vec3b>(i,j)).getUniqueColor();
doesn't work because you're trying to call the method getUniqueColor() on a Vec3b, which doesn't have this method. You should instead pass the Vec3b as the argument of unsigned getUniqueColor(const Vec3b& v);.
The code should clarify this:
#include <opencv2/opencv.hpp>
using namespace cv;

unsigned getUniqueColor_v1(const Vec3b& v)
{
    return ((v[2] & 0xff) << 16) + ((v[1] & 0xff) << 8) + (v[0] & 0xff);
}

unsigned getUniqueColor_v2(const Vec3b& v)
{
    // note: reads one byte past the end of the Vec3b; the mask discards it
    return 0x00ffffff & *((unsigned*)(v.val));
}

struct BGR
{
    Vec3b v;
    unsigned u;

    BGR(const Vec3b& v_) : v(v_) {
        u = ((v[2] & 0xff) << 16) + ((v[1] & 0xff) << 8) + (v[0] & 0xff);
    }

    BGR(unsigned u_) : u(u_) {
        v[0] = uchar(u & 0xff);
        v[1] = uchar((u >> 8) & 0xff);
        v[2] = uchar((u >> 16) & 0xff);
    }
};

int main()
{
    Vec3b v(45, 200, 23);
    unsigned col1 = getUniqueColor_v1(v);
    unsigned col2 = getUniqueColor_v2(v);
    unsigned col3 = BGR(v).u;
    // col1 == col2 == col3
    //
    // hex: 0x0017c82d
    // dec: 1558573

    Vec3b v2 = BGR(col3).v;
    // v2 == v

    //////////////////////////////
    // Taking values from a mat
    //////////////////////////////

    // Just 2 10x10 green mats
    Mat mat1(10, 10, CV_8UC3);
    mat1.setTo(Vec3b(0, 255, 0));
    Mat3b mat2(10, 10, Vec3b(0, 255, 0));

    int row = 2;
    int col = 3;

    unsigned u1 = getUniqueColor_v1(mat1.at<Vec3b>(row, col));
    unsigned u2 = BGR(mat1.at<Vec3b>(row, col)).u;
    unsigned u3 = getUniqueColor_v1(mat2(row, col));
    unsigned u4 = BGR(mat2(row, col)).u;
    // u1 == u2 == u3 == u4

    return 0;
}

iOS convert IP Address to integer and backwards

Say I have an NSString
NSString *myIpAddress = @"192.168.1.1";
I want to convert this to an integer, increment it, and then convert it back to an NSString.
Does iOS have an easy way to do this other than using bit masks, shifting, and sprintf?
Something like this is what I do in my app:
NSArray *ipExplode = [string componentsSeparatedByString:@"."];
int seg1 = [ipExplode[0] intValue];
int seg2 = [ipExplode[1] intValue];
int seg3 = [ipExplode[2] intValue];
int seg4 = [ipExplode[3] intValue];
uint32_t newIP = 0;
newIP |= (uint32_t)(seg1 & 0xFF) << 24;
newIP |= (uint32_t)(seg2 & 0xFF) << 16;
newIP |= (uint32_t)(seg3 & 0xFF) << 8;
newIP |= (uint32_t)(seg4 & 0xFF) << 0;
newIP++;
NSString *newIPStr = [NSString stringWithFormat:@"%u.%u.%u.%u",
                      ((newIP >> 24) & 0xFF),
                      ((newIP >> 16) & 0xFF),
                      ((newIP >> 8) & 0xFF),
                      ((newIP >> 0) & 0xFF)];
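If using the BSD socket helpers available on iOS is acceptable, the splitting and re-joining can also be delegated to inet_pton/inet_ntop, with ntohl/htonl handling byte order; a plain-C sketch for IPv4:
#include <arpa/inet.h>
#include <stdio.h>

int main(void)
{
    const char *ip = "192.168.1.1";

    struct in_addr addr;
    if (inet_pton(AF_INET, ip, &addr) != 1)
        return 1;                         // not a valid IPv4 address

    uint32_t host = ntohl(addr.s_addr);   // network order -> host order
    host += 1;                            // increment the address
    addr.s_addr = htonl(host);            // back to network order

    char buf[INET_ADDRSTRLEN];
    if (inet_ntop(AF_INET, &addr, buf, sizeof buf) != NULL)
        printf("%s\n", buf);              // expected: 192.168.1.2

    return 0;
}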

Converting InputStream data to different datatypes

I have been working with InputStreams in Objective-C and seem to have taken a wrong step in processing the received data.
I am receiving chunks of bytes, which are read and converted to data types such as integers, floats, doubles, etc.
So far my process is like this:
readBuffer = (uint8_t *) malloc(4);
memset(readBuffer, 0, 4);
while (length < byteLength) {
    length = [InputStream read:readBuffer maxLength:4];
}
[something fourByteUint8ToLong:readBuffer];
Now, in order to convert the 4 bytes to a long:
- (long)fourByteUint8ToLong:(uint8_t *)buffer
{
    long temp = 0;
    temp |= buffer[0] & 0xFF;
    temp <<= 8;
    temp |= buffer[1] & 0xFF;
    temp <<= 8;
    temp |= buffer[2] & 0xFF;
    temp <<= 8;
    temp |= buffer[3] & 0xFF;
    return temp;
}
Is there an easier way to handle this using the Objective-C classes?
If so, how? 8 bytes -> double, 4 bytes -> float.
Thanks in advance.
Problem solved by using a CoreFoundation function:
uint8_t *buffer = (uint8_t *) malloc(8);
// buffer is filled with the 8 bytes read from the stream before converting
double tempDouble = CFConvertFloat64SwappedToHost(*((CFSwappedFloat64 *)buffer));
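A self-contained sketch of that route, assuming (as above) that the incoming bytes are big-endian, which is what Java writes by default; the byte values here are just a worked example for 1.5:
#include <CoreFoundation/CoreFoundation.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const uint8_t bytes[8] = {0x3F, 0xF8, 0, 0, 0, 0, 0, 0};   // 1.5 in big-endian IEEE 754

    CFSwappedFloat64 swapped;
    memcpy(&swapped, bytes, sizeof swapped);                   // copy the raw big-endian bytes
    double value = CFConvertFloat64SwappedToHost(swapped);     // convert to the host's format

    printf("%f\n", value);                                     // expected: 1.500000
    return 0;
}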
