I have a variable (unsigned int) part_1.
If I do this:
NSLog(#"%u %08x", part_1, part_1); (print unsigned value, and hex value) it outputs:
2063597568 7b000000
(only the first two hex digits have values).
I want to convert this to
0000007b
So I've tried doing
unsigned int part_1b = part_1 >> 6; (and lots of variations)
But this outputs:
32243712 01ec0000
Where am I going wrong?
You want to shift by 6*4 = 24 bits, not just 6 bits. Each '0' in the hex output represents 4 bits.
unsigned int part_1b = part_1 >> 24;
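As a quick check: 2063597568 is 0x7b000000, and 0x7b000000 >> 24 is 0x0000007b (decimal 123), which is exactly the layout you were after.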
I'm looking for how to turn bytes into a signed int using Lua 5.1.5; so far I've only been able to find solutions for Lua 5.2 onward, and they are not backward compatible.
I have solutions for how to turn bytes into unsigned integers, like so:
payload_t.temperature=tonumber(utility.hex2str(string.sub(payload,32,33)),16)
First of all, I'll assume that you actually have a byte string rather than a hex string; if your string is a hex string, you can trivially convert it to a byte string using gsub:
function hex2bytes(str)
    -- assert that it is indeed a string of hex digit pairs
    assert(#str % 2 == 0 and not str:match"[^%x]")
    -- convert each pair of hex digits to the corresponding byte
    return (str:gsub("%x%x", function(hex) return string.char(tonumber(hex, 16)) end))
end
Now, let's convert this byte string to an integer. I'll assume little endian (least significant byte first); should your string be big endian (most significant byte first) you'll have to reverse it using str:reverse() before you read it.
Reading an unsigned integer is pretty straightforward:
function bytes2uint(str)
    local uint = 0
    for i = 1, #str do
        uint = uint + str:byte(i) * 0x100^(i-1)
    end
    return uint
end
I'll assume your integers are stored using two's complement. In that case, the upper half of the 2^n values the uint can take (those with the first bit set, i.e. values >= 2^(n-1)) represent negative numbers, with the smallest of them, 2^(n-1), representing the most negative value, -2^(n-1). Thus you can simply subtract 2^n, the (exclusive) maximum value of the uint, from the unsigned value:
function bytes2int(str)
    local uint = bytes2uint(str)
    local max = 0x100 ^ #str
    if uint >= max / 2 then
        return uint - max
    end
    return uint
end
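For example, with the little-endian convention above, a two-byte string with the bytes 0xF4 0x01 gives bytes2uint = 0xF4 + 0x01*0x100 = 500; since 500 < 0x10000/2, bytes2int returns 500 unchanged. A string with the bytes 0x0C 0xFE gives 0x0C + 0xFE*0x100 = 65036, which is >= 32768, so bytes2int returns 65036 - 65536 = -500.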
Here is a simple bpftrace script:
#!/usr/bin/env bpftrace
tracepoint:syscalls:sys_enter_kill
{
    $tpid = args->pid;
    printf("%d %d %d\n", $tpid, $tpid < 0, $tpid >= 0);
}
It traces kill syscalls, prints the target PID and two additional values: whether it is negative, and whether it is non-negative.
Here is the output that I get:
# ./test.bt
Attaching 1 probe...
-1746 0 1
-2202 0 1
4160 0 1
4197 0 1
4197 0 1
-2202 0 1
-1746 0 1
Weirdly, both positive and negative pids appear to be positive for the comparison operator.
Just as a sanity check, if I replace the assignment line with:
$tpid = -10;
what I get is exactly what I expect:
# ./test.bt
Attaching 1 probe...
-10 1 0
-10 1 0
-10 1 0
What am I doing wrong?
As you've discovered, bpftrace assigns a u64 type to your $tpid variable. Yet, according to the tracepoint format file, args->pid should be of type pid_t, i.e. int.
# cat /sys/kernel/debug/tracing/events/syscalls/sys_enter_kill/format
name: sys_enter_kill
ID: 185
format:
field:unsigned short common_type; offset:0; size:2; signed:0;
field:unsigned char common_flags; offset:2; size:1; signed:0;
field:unsigned char common_preempt_count; offset:3; size:1; signed:0;
field:int common_pid; offset:4; size:4; signed:1;
field:int __syscall_nr; offset:8; size:4; signed:1;
field:pid_t pid; offset:16; size:8; signed:0;
field:int sig; offset:24; size:8; signed:0;
print fmt: "pid: 0x%08lx, sig: 0x%08lx", ((unsigned long)(REC->pid)), ((unsigned long)(REC->sig))
The bpftrace function that assigns this type is TracepointFormatParser::adjust_integer_types(). This change was introduced by commit 42ce08f to address issue #124.
For the above tracepoint description, bpftrace generates the following structure:
struct _tracepoint_syscalls_sys_enter_kill
{
    unsigned short common_type;
    unsigned char common_flags;
    unsigned char common_preempt_count;
    int common_pid;
    int __syscall_nr;
    u64 pid;
    s64 sig;
};
When it should likely generate:
struct _tracepoint_syscalls_sys_enter_kill
{
    unsigned short common_type;
    unsigned char common_flags;
    unsigned char common_preempt_count;
    int common_pid;
    int __syscall_nr;
    u32 pad1;
    pid_t pid;
    u32 pad2;
    int sig;
};
bpftrace seems to be confused by the size parameter that doesn't match the type in the above description. All syscall arguments get size 8 (on 64-bit at least), but that doesn't mean all 8 bytes are used. I think it would be worth opening an issue on bpftrace.
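The effect is easy to reproduce outside bpftrace. Here is a minimal C sketch (an analogy, not bpftrace's actual code) of what happens when a negative 32-bit pid ends up in a variable that is treated as a u64:
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    int32_t pid = -1746;                       /* value the tracepoint actually carries */
    uint64_t as_u64 = (uint64_t)(int64_t)pid;  /* sign-extended, then treated as unsigned */
    /* As an unsigned 64-bit value this is huge and can never compare below zero: */
    printf("%" PRIu64 "\n", as_u64);
    int32_t back = (int32_t)as_u64;            /* an explicit narrowing cast restores the sign */
    printf("%" PRId32 " %d %d\n", back, back < 0, back >= 0);
    return 0;
}
The second printf prints "-1746 1 0", which is the behaviour one would expect from the script.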
There is something strange going on with integer types in bpftrace (see #554, #772, #834 for details).
It seems that in my case args->pid gets treated as a 64-bit value by default, while it is actually not. So the solution is to explicitly cast it:
$tpid = (int32)args->pid;
And now it works as expected:
# bpftrace test.bt
Attaching 1 probe...
-2202 1 0
-1746 1 0
-2202 1 0
4160 0 1
4197 0 1
I would want to ask, how do you break up a 32-bit hex value (for example: CEED6644) into 4 bytes (var1 = CE, var2 = ED, var3 = 66, var4 = 44), in QB64 or QBasic? I would use this to store several data bytes in one array address.
Something like this:
DIM Array(&HFFFF&) AS _UNSIGNED LONG
Array(&HAA00&) = &HCEED6644&
addr = &HAA00&
SUB PrintChar
    SHARED addr
    IF var1 = &HAA& THEN PRINT "A"
    IF var1 = &HBB& THEN PRINT "B"
    IF var1 = &HCC& THEN PRINT "C"
    IF var1 = &HDD& THEN PRINT "D"
    IF var1 = &HEE& THEN PRINT "E"
    IF var1 = &HFF& THEN PRINT "F"
    IF var1 = &H00& THEN PRINT "G"
    IF var1 = &H11& THEN PRINT "H"
And so on...
You could use integer division (\) and bitwise AND (AND) to accomplish this.
DIM x(0 TO 3) AS _UNSIGNED _BYTE
a& = &HCEED6644&
x(0) = (a& AND &HFF000000&) \ 2^24
x(1) = (a& AND &H00FF0000&) \ 2^16
x(2) = (a& AND &H0000FF00&) \ 2^8
x(3) = a& AND &HFF&
PRINT HEX$(x(0)); HEX$(x(1)); HEX$(x(2)); HEX$(x(3))
Note that you could alternatively use a generic RShift~& function instead of raw integer division since what you're really doing is shifting bits:
x(0) = RShift~&(a& AND &HFF000000&, 24)
...
FUNCTION RShift~& (value AS _UNSIGNED LONG, shiftCount AS _UNSIGNED _BYTE)
    ' Raise illegal function call if the shift count is greater than the width of the type.
    ' If shiftCount is not _UNSIGNED, then you must also check that it isn't less than 0.
    IF shiftCount > 32 THEN ERROR 5
    RShift~& = value \ 2 ^ shiftCount
END FUNCTION
Building upon that, you might create another function:
FUNCTION ByteAt~%% (value AS _UNSIGNED LONG, position AS _UNSIGNED _BYTE)
    ' position must be in the range [0, 3].
    IF (position AND 3) <> position THEN ERROR 5
    ByteAt~%% = RShift~&(value AND LShift~&(&HFF&, 8 * position), 8 * position)
END FUNCTION
Note that an LShift~& function was used that shifts bits to the left (multiplication by a power of 2). A potentially better alternative would be to perform the right-shift first and just mask the lower 8 bits, eliminating the need for LShift~&:
FUNCTION ByteAt~%% (value AS _UNSIGNED LONG, position AS _UNSIGNED _BYTE)
    ' position must be in the range [0, 3].
    IF (position AND 3) <> position THEN ERROR 5
    ByteAt~%% = RShift~&(value, 8 * position) AND 255
END FUNCTION
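For the example value, ByteAt~%%(&HCEED6644&, 3) returns &HCE (the most significant byte) and ByteAt~%%(&HCEED6644&, 0) returns &H44 (the least significant byte).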
Incidentally, another QB-like implementation known as FreeBASIC has an actual SHR operator, used like MOD or AND, to perform a shift operation directly instead of using division, which is potentially faster.
You could also use QB64's DECLARE LIBRARY facility to create functions in C++ that will perform the shift operations:
/*
 * Place in a separate "shift.h" file or something.
 */
unsigned int LShift (unsigned int n, unsigned char count)
{
    return n << count;
}

unsigned int RShift (unsigned int n, unsigned char count)
{
    return n >> count;
}
Here's the full corresponding QB64 code:
DECLARE LIBRARY "shift"
FUNCTION LShift~& (value AS _UNSIGNED LONG, shiftCount AS _UNSIGNED _BYTE)
FUNCTION RShift~& (value AS _UNSIGNED LONG, shiftCount AS _UNSIGNED _BYTE)
END DECLARE
x(0) = ByteAt~%%(a&, 0)
x(1) = ByteAt~%%(a&, 1)
x(2) = ByteAt~%%(a&, 2)
x(3) = ByteAt~%%(a&, 3)
END
FUNCTION ByteAt~%% (value AS _UNSIGNED LONG, position AS _UNSIGNED BYTE)
'position must be in the range [0, 3].
IF (position AND 3) <> position THEN ERROR 5
ByteAt~%% = RShift~&(value, 8*position) AND 255
END FUNCTION
If QB64 had a documented API, it might be possible to raise a QB64 error from the C++ code when the shift count is too large, rather than relying on whatever C++ does with an over-wide shift count (which is technically undefined behavior). Unfortunately, this isn't the case, and it might actually cause more problems than it's worth.
This snippet gets the hex digit pairs (one pair per byte) of a hexadecimal value:
DIM Value AS _UNSIGNED LONG
Value = &HCEED6644&
S$ = RIGHT$("00000000" + HEX$(Value), 8)
PRINT "Byte#1: "; MID$(S$, 1, 2)
PRINT "Byte#2: "; MID$(S$, 3, 2)
PRINT "Byte#3: "; MID$(S$, 5, 2)
PRINT "Byte#4: "; MID$(S$, 7, 2)
I was solving this codechef problem on Fibonacci numbers. It says the number can have up to 1000 digits, so why doesn't it cause an integer overflow in the tester's solution, which scans the digits and accumulates them into an unsigned long long int? I can't understand how the solution works. Below are the problem and the tester's solution.
The Head Chef has been playing with Fibonacci numbers for long. He has learnt several tricks related to Fibonacci numbers. Now he wants to test his chefs in the skills.
A Fibonacci number is defined by the recurrence:
f(n) = f(n-1) + f(n-2) for n > 2
and f(1) = 0
and f(2) = 1.
Given a number A, determine if it is a Fibonacci number.
Input
The first line of the input contains an integer T denoting the number of test cases. The description of T test cases follows.
The only line of each test case contains a single integer A denoting the number to be checked.
Output
For each test case, output a single line containing "YES" if the given number is a Fibonacci number, otherwise output a single line containing "NO".
Constraints
1 ≤ T ≤ 1000
1 ≤ number of digits in A ≤ 1000
The sum of number of digits in A in all test cases <= 10000.
Example
Input:
3
3
4
5
Output:
YES
NO
YES
Tester's solution:
#include <iostream>
#include <cstdio>
#include <algorithm>
#include <set>
#include <cstring>

using namespace std;

int const mx = 6666;
set<unsigned long long> f;
unsigned long long fib[mx + 10];
char s[mx + 1];

int main(){
    // freopen("input.txt", "r", stdin);
    // freopen("output.txt", "w", stdout);
    fib[0] = 0;
    fib[1] = 1;
    f.insert(1);
    f.insert(0);
    int i;
    for (i = 2; i <= mx; i++){
        fib[i] = fib[i - 1] + fib[i - 2];
        f.insert(fib[i]);
    }
    int tc;
    cin >> tc;
    while (tc--){
        unsigned long long n = 0, ten = 10;
        cin >> s;
        int len = strlen(s);
        for (i = 0; i < len; i++){
            char q = s[i];
            unsigned long long a = q - '0';
            n = n * ten + a;
        }
        if (f.find(n) == f.end()) printf("NO\n");
        else printf("YES\n");
    }
    return 0;
}
From cplusplus.com you will see that:
ULLONG_MAX: Maximum value for an object of type unsigned long long int is 18446744073709551615 (2^64 - 1) or greater.
The actual value depends on the particular system and library implementation, but shall reflect the limits of these types in
the target platform.
The above information is just to let you know that this is a BIG number. However, the cause of not getting an overflow is not the limit I mentioned.
Most probably, the judge's input file does not contain any input that would cause an overflow.
And it is still possible to construct such input while satisfying the conditions:
1 ≤ T ≤ 1000
1 ≤ number of digits in A ≤ 1000
The sum of number of digits in A in all test cases <= 10000.
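For instance, any single test case longer than 20 digits already satisfies these constraints yet no longer fits in 64 bits. A minimal C sketch of what the tester's accumulation loop then does (the input string below is made up for illustration):
#include <stdio.h>

int main(void)
{
    const char *s = "999999999999999999999";   /* 21 digits, larger than ULLONG_MAX */
    unsigned long long n = 0;
    for (const char *p = s; *p != '\0'; ++p)
        n = n * 10 + (unsigned long long)(*p - '0');   /* same accumulation as the tester's loop */
    printf("%llu\n", n);   /* prints a value wrapped modulo 2^64, not the real number */
    return 0;
}
The set lookup is therefore performed on the wrapped value rather than on the actual 1000-digit number.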
I have a specific requirement to convert a stream of bytes into a character encoding that happens to be 6 bits per character.
Here's an example:
Input: 0x50 0x11 0xa0
Character Table:
010100 T
000001 A
000110 F
100000 SPACE
Output: "TAF "
Logically I can understand how this works:
Taking 0x50 0x11 0xa0 and showing it as binary:
01010000 00010001 10100000
Regrouping into 6-bit units gives 010100 000001 000110 100000,
which is "TAF ".
What's the best way to do this programmatically (pseudo-code or C++)? Thank you!
Well, every 3 bytes, you end up with four characters. So for one thing, you need to work out what to do if the input isn't a multiple of three bytes. (Does it have padding of some kind, like base64?)
Then I'd probably take each 3 bytes in turn. In C#, which is close enough to pseudo-code for C :)
for (int i = 0; i < array.Length; i += 3)
{
    // Top 6 bits of byte i
    int value1 = array[i] >> 2;
    // Bottom 2 bits of byte i, top 4 bits of byte i+1
    int value2 = ((array[i] & 0x3) << 4) | (array[i + 1] >> 4);
    // Bottom 4 bits of byte i+1, top 2 bits of byte i+2
    int value3 = ((array[i + 1] & 0xf) << 2) | (array[i + 2] >> 6);
    // Bottom 6 bits of byte i+2
    int value4 = array[i + 2] & 0x3f;

    // Now use value1...value4, e.g. putting them into a char array.
    // You'll need to decode from the 6-bit number (0-63) to the character.
}
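Applied to the bytes from the question, a direct translation of the same extraction into C might look like this (the decode function below only covers the four characters given in the example table; a real decoder would use a full 64-entry table):
#include <stdio.h>

/* Map a 6-bit value to a character; only the example's entries are filled in. */
static char decode(int v)
{
    switch (v) {
    case 024: return 'T';   /* 010100 */
    case 001: return 'A';   /* 000001 */
    case 006: return 'F';   /* 000110 */
    case 040: return ' ';   /* 100000 */
    default:  return '?';
    }
}

int main(void)
{
    unsigned char input[] = { 0x50, 0x11, 0xa0 };
    size_t i;

    for (i = 0; i + 2 < sizeof input; i += 3) {
        int value1 = input[i] >> 2;                                     /* top 6 bits of byte i */
        int value2 = ((input[i] & 0x3) << 4) | (input[i + 1] >> 4);     /* 2 + 4 bits */
        int value3 = ((input[i + 1] & 0xf) << 2) | (input[i + 2] >> 6); /* 4 + 2 bits */
        int value4 = input[i + 2] & 0x3f;                               /* bottom 6 bits of byte i+2 */
        printf("%c%c%c%c", decode(value1), decode(value2), decode(value3), decode(value4));
    }
    putchar('\n');   /* prints: TAF  (with a trailing space) */
    return 0;
}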
Just in case someone is interested, here is another variant that extracts 6-bit numbers from the stream as soon as they become available. That is, results can be obtained even if fewer than 3 bytes have been read so far. This would be useful for unpadded streams.
The code keeps its state in the accumulator a and in the variable n, which stores the number of bits left over in the accumulator from the previous byte.
int n = 0;
unsigned char a = 0;
unsigned char b = 0;

while (read_byte(&b)) {
    // save the (6 - n) most significant bits of the input byte to the
    // proper position in the accumulator
    a |= (b >> (n + 2)) & (077 >> n);
    store_6bit(a);
    a = 0;
    // save the remaining least significant bits of the input byte to the
    // proper position in the accumulator
    a |= (b << (4 - n)) & ((077 << (4 - n)) & 077);
    if (n == 4) {
        store_6bit(a);
        a = 0;
    }
    n = (n + 2) % 6;
}
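Note that read_byte and store_6bit are placeholders here: read_byte is assumed to fetch the next input byte into its argument and return whether one was available, and store_6bit is assumed to consume a completed 6-bit value (e.g. by looking it up in the character table). Fed the example bytes 0x50 0x11 0xa0, store_6bit is called four times, with 010100, 000001, 000110 and 100000, i.e. "TAF " after the table lookup.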